Oh, and why bother with that spheregame ^_^ when 3d acceleration hardware can bring stuff like this? - http://bohemiq.scali.eu.org/shadows.jpg
Posted on 2003-11-30 16:24:12 by f0dder
I think most of the nonsense has been aired by now. I originally made a comment that x86 hardware has technical limitations, stemming from the age of its architecture, that in turn limit how fast you can handle complex data in comparison with modern technology. I also made the comment that DirectX has its roots in x86 hardware as a matter of historical fact, and that it is not a universally available technology for other platforms.

To add to this, I made reference to Unix-based SGI hardware using OpenGL that is faster than the fastest of Windows x86-based DirectX.

Now it's question time. :tongue:

1. What Microsoft Windows system, on any hardware it runs on, is faster than an SGI box running Unix and OpenGL? The comparison here is current hardware to current hardware.

2. How many x86 processors do you need to strap together to emulate this style of performance, and which Windows version and DirectX release will it run on?

3. How do you strap together a number of x86 processors so that they drive a current Taiwanese AGP terror of a video card, and why would you bother?

4. What is the percentage of processing power lost with each additional x86 processor added to the cluster?

5. When will DirectX become a mature industrial standard like OpenGL, and when will it become stable in the face of multiple releases over a short time?

6. When will DirectX be available for the sum total of Unix-based systems running non-x86 hardware?

7. Which release version of Microsoft Windows is capable of running any of the clustered supercomputers in the list you posted earlier?

8. How many years will it take for Microsoft Windows operating systems, and the DirectX that depends on them, to catch up with existing, here-today SGI hardware running OpenGL on Unix?

Until you can answer these questions, I stand by the original posting I made about the age of the x86 architecture and the viability of DirectX: there has been, there is, and there will be better hardware/software available for high-end graphics than x86, Microsoft Windows, and DirectX.

Do we wait until an x86 processor is 1024-bit capable at some stage in the future? Should processor design engineers impose the old bus limits, memory address limits, and instruction range limits on the new stuff?

It may feel good piddling around with the low end of the market and the latest flavour-of-the-second accelerated graphics card hot off the press from Taiwan while kidding yourself that this is profound leading-edge technology, but until Microsoft Windows is capable of running high-end hardware to support DirectX, you are deluding yourself.

To return your original wisecrack: when are you going to admit that you don't know what you are talking about?

Regards,
Posted on 2003-11-30 17:56:28 by hutch--
1. With the same 3D graphics accelerator, I'd say "any Windows + DX".
2. Leave DX out of this question, as Bruce's cluster stuff had nowt to do with DX. But add "what is the total cost of the cluster?"
3. Not relevant. If you're clustering for graphics, you're doing off-line graphics rendering, which has nowt to do with the graphics card - nor GL/DX.
4. Huh? Rephrase - what exactly do you mean? Memory bandwidth in a cluster with multi-cpu cells? Or?
5. DX is not as widespread as GL, but it still runs on multiple architectures. It has better hardware support and control than GL, no worse performance on the same GPUs, and is rather stable - plus it has full backwards compatibility, because COM is what COM is.
6. Irrelevant to the questions raised in this thread, but the unix people really ought to look past the religious issues... objectively, DX is the better API. Have you ever looked at either?
7. I can't answer that as I haven't looked at the list, but a guess would be a 64bit version of XP for Itanium2.
8. Catch up with? As of now, it's GL that has to catch up with DX. Please don't confuse proprietary offline rendering APIs with either GL or DX.

Funny that you still keep rambling about other hardware and can't answer simple questions. While it's not relevant to my DX/GL question in any way, the whole architecture crap can be seen as a performance/price thing. I don't know where x86 rates compared to the rest - and (in the context of this thread) I don't care. It's the DX/GL APIs I'm interested in... and not so much the performance, as both APIs should be able to leverage the same performance (at the cost of more code and heuristics in the GL driver code).

So, hutch, what are you talking about? Realtime hardware accelerated 3D rendering, or the type of 3D rendering used in movies? The first is related to the GL/DX debate, the second to the whole cluster thing... please don't confuse the two. And what about answering some questions?
Posted on 2003-11-30 18:11:58 by f0dder
f0dder,

On the box you use, it's basically your choice whether to use OpenGL or DirectX. OpenGL is certainly a lot more stable, as it has been around longer; DirectX suffers the same problem as anything current from Microsoft in that it changes almost daily, but if you can keep up with the additional releases and technical changes, DirectX will probably do the job for you fine.

Where this is a problem is in getting your DirectX software to work on anything but a machine that always has the latest version of DirectX and the right hardware to take advantage of it.

I gather that as of about version 8.1, existing software ran slower because of the increase in code size to support the later accelerated video cards; while it worked fine on the later cards, it was actually slower on earlier hardware than earlier versions of DirectX were.

Now if you are selling software that requires the version of DirectX that you assume, what you need to do is a redistribution deal with Microsoft so that you can carry the upgrade with your software.

This is where you may have reason to use OpenGL, if its capacity fits with your software design, as its stability is a winner.

Now as far as your comments about the price-to-performance ratio within a personal-user scale of operations go, I see nothing wrong with that at all. You may not be familiar with the terminology, but that's what "bang for your buck" is about: how much performance do you need, and for how much money?

The type of high-end stuff that starts at a bargain somewhere over 100 grand US is purchased by government departments and large corporations who have a use for it, not personal users playing games. It's something like having an OC48 wired to your PIV: you just could not use the bandwidth, and if it were even partially used to capacity, your PC would stop, as it has no way of dealing with that quantity of information. The buzz term is scaling.

Regards,



Los Alamos National Laboratory Selects World's Most Powerful Advanced Visualization from SGI

"This is the largest, most powerful visualization system in the world," said Shawn Underwood, director of marketing, Visual Systems Group, SGI. "This Onyx4 system will be capable of powering over 120 megapixels of screen area and has a fill rate of over 40 gigapixels per second, which is enough pixels per second to put a new image on the average screen of every computer in the world nearly 5000 times a day."

http://www.sgi.com/newsroom/press_releases/2003/july/lanl.html

Just weeks after attaining record levels of sustained performance and scalability on a 256-processor global shared-memory SGI Altix 3000 system, the team at NASA Ames doubled the size of its Altix system, achieving 512 processors in a single image, by far the largest supercomputer ever to run on the Linux operating system. (NASA announced its technical feat at the SC2003 supercomputing conference.) NASA's effort is part of an intra-agency collaborative research program between NASA Ames, JPL and NASA's Goddard Space Flight Center to accelerate the science return for large-scale earth modeling problems.

http://www.sgi.com/features/2003/nov/nasa/index.html

I wonder why they are not using Windows and DirectX in these applications. Is it not fast enough to do the job? Isn't it a cheaper solution to use cheap clustered x86 processors?
Posted on 2003-11-30 19:24:59 by hutch--
Oh, but you have misunderstood a lot of things hutch :)

COM brings some additional clock cycles of overhead - compared to the whole code path (and even by itself), it's not much: an indirect call, pft. Even if coded in pure assembly, considering what the API is doing, this overhead is not much. DirectX retains full (as far as I know) backwards compatibility - this is the cause of the 'bloat' - which means it's actually rather stable. While remaining stable, it still offers (almost) immediate access to the latest 3D hardware features on a wide range of hardware - something OpenGL doesn't. And this is in cooperation with the largest 3D hardware vendors around... It's not Microsoft despotically deciding the D3D API.
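
To put that "indirect call" in perspective, here's a minimal C++ sketch of what a COM-style method call boils down to. The IRenderer interface and its method are hypothetical, not an actual DX interface:

#include <cstdio>

// A COM-style interface is a pointer to a table of function pointers
// (the vtable); calling a method costs one indirect call through it.
struct IRenderer
{
    virtual unsigned long AddRef() = 0;
    virtual unsigned long Release() = 0;
    virtual void DrawPrimitive(int count) = 0; // hypothetical method
};

struct Renderer : IRenderer
{
    unsigned long refs;
    Renderer() : refs(1) {}
    unsigned long AddRef() { return ++refs; }
    unsigned long Release()
    {
        unsigned long r = --refs;
        if (r == 0) delete this;
        return r;
    }
    void DrawPrimitive(int count) { std::printf("drawing %d primitives\n", count); }
};

int main()
{
    IRenderer* dev = new Renderer; // real COM would go through a factory
    dev->DrawPrimitive(100);       // load vtable, load slot, indirect call
    dev->Release();
    return 0;
}

Next to the work the driver does per call, that one indirect call is noise.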


Where this is a problem is in getting your DirectX software to work on anything but a machine that always has the latest version of DirectX and the right hardware to take advantage of it.

Please make the distinction between the "professional" (aka "we're lazy and use the EASIEST api") API and the "most interesting" API. And between realtime and offline rendering. Again, neither GL nor DX would do particularly well for offline rendering.


I gather that as of about version 8.1, existing software ran slower because of the increase in code size to support the later accelerated video cards; while it worked fine on the later cards, it was actually slower on earlier hardware than earlier versions of DirectX were.

It might run worse on old hardware, but have you watched noncompliant GL implementations run later GL titles? You get less than one fps :)
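
For what it's worth, the usual defensive trick is to sniff the implementation strings before relying on anything fancy. A rough C++ sketch - what you do when you detect the software path is up to the app:

#include <GL/gl.h>
#include <cstring>
#include <cstdio>

// Call with a current GL context. Microsoft's generic "GDI Generic"
// renderer is a software-only GL 1.1 implementation; anything written
// for later hardware will crawl on it, so warn or bail out early.
bool UsingSoftwareGL()
{
    const char* vendor   = (const char*)glGetString(GL_VENDOR);
    const char* renderer = (const char*)glGetString(GL_RENDERER);
    std::printf("GL_VENDOR: %s, GL_RENDERER: %s\n", vendor, renderer);
    return renderer && std::strstr(renderer, "GDI Generic") != 0;
}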


This is where you may have reason to use OpenGL, if its capacity fits with your software design, as its stability is a winner.

Yes, and your vendor-, architecture- and OS-dependent (let me elaborate if you doubt those adjectives on a "portable API") GL code will run a bit better than generic D3D code... just look at Doom 3: Carmack had to write, what was it, at least three backends to cover "most" of the video cards out there. And the nvidia path only had to drop "some" accuracy, which is "acceptable" because Doom 3 is as dark as it is.
Posted on 2003-11-30 19:37:32 by f0dder


It exists now, and it's still under development.
Let's not compare one man's software with $300+ nVidia hardware.
Are GeForce cards able to do such REALTIME "ugly shading"?
NO. They can't do raytracing. They just accelerate Wolf3D technology + some advantages + some... + ...
Let's try to compare the power of the CPU and the GPU.
And remember Moore's law.
What about GPU evolution? :grin:

I remember the first 3dfx game, Turok (of course a revolution, on a P200 MMX).
Nowadays, IMHO, games are not so much better than that was.

Well, when we get a 4x A64 3GHz+ CPU, will flexible software rendering be worse than flat triangles rendered by a SLOW ~500MHz GeForce GPU?

What did I want to say?
I remember the SoundBlaster for $200.
How much does a sound card cost now?
And where on the motherboard is it located :grin:
~$0.50 for connectors and an AC97 codec.
Of course, if we are going to use $400 speakers we'll buy a $200 sound card. Just for the quality of sound. No more.

People who code for 64-bit CPUs could kill modern gfx cards.
On the other side, though, GPU vendors have marketing departments...
I'd like to see a realtime NURBS engine. :grin:
Posted on 2003-11-30 19:46:02 by bitRAKE
BitRAKE, something able to handle NURBS would be truly interesting - then again, those should be manageable in realtime rather well as triangulated meshes? Not that realtime raytracing or NURBS (more manageable, I guess?) wouldn't be nice, but triangles seem to be way easier to build hardware logic for, and they seem to perform reasonably well, both performance- and quality-wise :)
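
A rough C++ sketch of that idea - evaluate the parametric surface on the CPU and hand the GPU plain triangles. A bicubic Bezier patch stands in for full NURBS evaluation here:

#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 Lerp(Vec3 a, Vec3 b, float t)
{
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// De Casteljau evaluation of a cubic Bezier curve.
static Vec3 Bezier(const Vec3 p[4], float t)
{
    Vec3 a = Lerp(p[0], p[1], t), b = Lerp(p[1], p[2], t), c = Lerp(p[2], p[3], t);
    return Lerp(Lerp(a, b, t), Lerp(b, c, t), t);
}

// Point on a 4x4 bicubic patch: four curves in u, then one in v.
static Vec3 Patch(const Vec3 cp[4][4], float u, float v)
{
    Vec3 col[4];
    for (int i = 0; i < 4; ++i) col[i] = Bezier(cp[i], u);
    return Bezier(col, v);
}

int main()
{
    Vec3 cp[4][4]; // a simple control grid; a real mesh would supply these
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            cp[i][j] = { (float)i, (float)j, 0.0f };

    // Tessellate into an N x N grid; each quad becomes two ordinary
    // triangles, so the hardware never knows the surface was curved.
    const int N = 8;
    for (int i = 0; i <= N; ++i)
        for (int j = 0; j <= N; ++j)
        {
            Vec3 p = Patch(cp, i / (float)N, j / (float)N);
            std::printf("v %.2f %.2f %.2f\n", p.x, p.y, p.z);
        }
    return 0;
}
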
Posted on 2003-11-30 19:50:30 by f0dder
f0dder, today is truly a virtual triangle world. Hypothetically, tomorrow could be a virtual NURBS world. The geometry transferred across the bus would be greatly lessened. Combine this with parametric texturing, and realtime changes to a realtime world become easier. IMHO, technology will migrate towards an optimal abstraction layer between the CPU and GPU. The GPU has many transformations to make before this is realized.
Posted on 2003-11-30 20:48:22 by bitRAKE
I originally made a comment that x86 hardware has technical limitations, stemming from the age of its architecture, that in turn limit how fast you can handle complex data in comparison with modern technology.


Yes, but you have yet to actually name any of these technical limitations. Until you can name one, I don't see why I should just take your statement for granted, especially considering the fact that there are some very powerful x86-based supercomputers around, and the likes of Google and Pixar have adopted the architecture. Do these not handle complex data?

I also made the comment that DirectX has its roots in x86 hardware as a matter of historical fact, and that it is not a universally available technology for other platforms.


That's not entirely right. Firstly, there are non-x86-based implementations. Secondly, you went much further: you implied that DirectX was somehow linked so closely to x86 that other platforms would require an x86 emulator to implement it.
Besides, you never took the trouble to mention exactly where and how it would be linked to x86 so closely, so I don't see why I should just take your statement for granted.

1. How about these HP workstations? They run Windows XP with DirectX, and they have more grunt than the SGI workstations. You'll love them: they don't use x86, but Itanium2, the fastest CPU available atm:
http://www.hp.com/workstations/itanium/index.html

2. In case you didn't know, Windows does support clustering, on both x86 and Itanium2 systems. And as f0dder already said, DirectX is not relevant there. It is available though.

3. You can get motherboards for multiple Xeon or Opteron CPUs. And clustering is as simple as hooking the boxes up to a network, and setting up the OS properly.

4. Hard to say, as it depends on many factors: how many CPUs per motherboard, how the cluster is configured, whether the task at hand is suitable for efficient clustering. What we DID see, however, is that x86 clusters scale pretty much linearly compared to the SGI Altix cluster, remember? So I don't think we need to worry about it too much. (There is a standard formula for this; see the sketch after these answers.)

5. Well, in the Windows-world, DirectX is pretty much the standard anyway. By far most Windows games use DirectX. And more and more 'professional' software now has a Direct3D option next to the traditional OpenGL one. Since Windows is more widespread than all other OSes put together, I don't even think the other OSes matter.

6. Most *nix systems are not aimed at graphics in the first place, so I don't think it's very high on their agenda anyway. You can already run DirectX applications through WineX on x86-based *nix flavours, though, and if you consider OS X a *nix system, there's another one that can run DirectX applications.

7. There ya go:
http://www.microsoft.com/windows2000/hpc/perform.asp
"In 2000, for the first time a Windows-based cluster from the National Center for Supercomputing Applications (NCSA) became a part of the Top 500 list for the first time. Top500 Supercomputer Sites is a list of the 500 most powerful computer systems in the world. The computers are ranked by their performance achieved with the LINPACK , a distinguished professor of the University of Tennessee Knoxville , introduced the LINPACK benchmark. The LINPACK benchmark involves solving a dense system of linear equations. For the Top 500, a version of the benchmark is used that allows the user to scale the size of the problem and to optimize the software in order to achieve the best performance for a given machine."

"The Transaction Processing Performance Council (TPC) is an independent body that administers performance benchmarks for database performance. Its TPC-C benchmark is the current standard for measuring database systems. The latest results from the TPC show Microsoft SQL Server running on Windows 2000-based servers (both clustered and stand-alone) dominating the field on the TPC-C benchmark. Microsoft SQL Server on Windows 2000 took first place and four of the top five places in comparisons of pure performance. The best-performing UNIX system came in sixth. When compared on the basis of price/performance, Microsoft SQL Server on either the Windows NT? or Windows 2000 operating system took all top ten spots."

http://www.microsoft.com/windows2000/hpc/default.asp
"One of the most important computing trends to emerge in the new millennium is the movement away from large servers and monolithic supercomputers to clustered solutions. Clusters of inexpensive servers, specifically Intel-based servers running Windows 2000, are setting records for performance, scalability, and reliability."

http://www.microsoft.com/windows2000/hpc/supercom.asp
"Yesterday?s supercomputer is today?s personal computer. By every measure a modern PC, even a sub-thousand-dollar model, has the attributes of the largest systems of the mid-1980s. Those twenty-year-old systems did the breakthrough work necessary to create today?s infrastructure, manufacturing processes, buildings, bridges, airframes, engines, and drugs, while also optimizing natural-resource harvesting, weapons, and more. Today?s PC, very much the larger systems' equal, sits on a desk, or even inside a briefcase."

"The raw performance and capacity of a 700-megahertz (MHz) processor, 1 gigabyte (GB) RAM, 100 GB of disk space, with three-dimensional (3-D) graphics accelerators can be seen in the benchmarks presented at www.specbench.org , where there are few, if any, competitors (at any price) with the x86-compatible processors and the Intergraph video cards."

"For example, 3-D graphics for computer-aided design/computer-aided manufacturing (CAD/CAM) used to be a specialty business, but now it?s being driven by the market in 3-D games, where a 0 solution now runs circles around implementations that cost ,000, even 0,000, just a few years ago."

So, don't take my word for it :)

8. Catch up with? As of now, it's GL that has to catch up with DX. Please don't confuse proprietary offline rendering APIs with either GL or DX.
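
Back to question 4 for a moment: the textbook way to quantify cluster scaling is Amdahl's law - a general formula, not a measurement of any particular machine. If a fraction $p$ of the work can be parallelized across $N$ processors, the best-case speedup is

$$S(N) = \frac{1}{(1 - p) + p/N}$$

With $p = 0.95$, for example, 16 processors give $S \approx 9.1$ rather than 16. The "loss" per added processor depends entirely on the serial fraction of the workload, which is why no single percentage answers the question.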

Anyway, these questions merely tried to establish an absence of x86 and DirectX in large-scale computing, it seems.
That says nothing about their suitability for large-scale computing, of course.

there is, and there will be better hardware/software available for high-end graphics than x86, Microsoft Windows, and DirectX.


Probably, yes; that depends on the situation. Besides, good is good enough. You don't always need the best system to get your work done, you just need a good system. And a good but cheap system is nice. That's what x86 is. You want to make us believe that only the best is good enough and everything else is useless. That's rubbish, of course.

To return your original wisecrack: when are you going to admit that you don't know what you are talking about?


I think we all know by now who it is that doesn't know what he's talking about.

On the box you use, it's basically your choice whether to use OpenGL or DirectX. OpenGL is certainly a lot more stable, as it has been around longer; DirectX suffers the same problem as anything current from Microsoft in that it changes almost daily, but if you can keep up with the additional releases and technical changes, DirectX will probably do the job for you fine.


DirectX is fully backwards-compatible. If you install DirectX 9, all previous versions will still work as well. So if you don't want to use the latest version, don't; the old stuff will still work on all machines anyway.
And OpenGL is not 'stable'; I prefer the term 'outdated'. How do I use my ps1.x hardware in OpenGL? Oh wait, SGI doesn't have that kind of hardware anyway, so it must not be important.
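
That backwards compatibility is structural: each Direct3D generation ships as its own COM DLL with its own factory export, so old and new coexist on one machine. A quick C++ sketch that probes for both side by side (error handling kept minimal):

#include <windows.h>
#include <cstdio>

// Installing DX9 does not replace or remove the DX8 runtime: d3d8.dll
// and d3d9.dll live side by side, each with its own creation function.
static bool HasD3D(const char* dll, const char* factory)
{
    HMODULE h = LoadLibraryA(dll);
    if (!h) return false;
    bool found = GetProcAddress(h, factory) != 0;
    FreeLibrary(h);
    return found;
}

int main()
{
    std::printf("Direct3D 8: %s\n", HasD3D("d3d8.dll", "Direct3DCreate8") ? "present" : "absent");
    std::printf("Direct3D 9: %s\n", HasD3D("d3d9.dll", "Direct3DCreate9") ? "present" : "absent");
    return 0;
}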

I gather that as of about version 8.1, existing software ran slower because of the increase in code size to support the later accelerated video cards; while it worked fine on the later cards, it was actually slower on earlier hardware than earlier versions of DirectX were.


That depends on the situation, does it not? DirectX 8 offered new features for more performance on current hardware, such as texture compression... Then again, it also offered features like shaders, which obviously will have to be emulated on older hardware, which is slower.
But if you aim at the same hardware as DirectX 7, DirectX 8 should be as fast, or faster.
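
The standard way to "aim at the same hardware" is to query the device caps up front and pick a code path, instead of letting old cards emulate. A minimal D3D8-style C++ sketch - device creation is omitted, and the fallback policy is made up:

#include <d3d8.h>
#include <cstdio>

// Given a created device, choose between a shader path and a
// fixed-function fallback based on what the hardware reports.
void PickRenderPath(IDirect3DDevice8* dev)
{
    D3DCAPS8 caps;
    dev->GetDeviceCaps(&caps);

    if (caps.VertexShaderVersion >= D3DVS_VERSION(1, 1) &&
        caps.PixelShaderVersion  >= D3DPS_VERSION(1, 1))
        std::printf("using vs1.1/ps1.1 path\n");
    else
        std::printf("using fixed-function DX7-class path\n");
}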

Now if you are selling software that requires the version of DirectX that you assume, what you need to do is a redistribution deal with Microsoft so that you can carry the upgrade with your software.


The redistributable DirectX 9 installer comes with the (free) SDK and may be freely redistributed (hence the name) by any developer, without any additional 'deals'.

This is where you may have reason to use OpenGL, if its capacity fits with your software design, as its stability is a winner.


Not exactly. There are different versions of OpenGL as well, and if you don't have the version required, the software won't work properly either. Now here's the catch: OpenGL is part of your display driver. The only way for a developer to distribute the latest version of OpenGL is to include the latest display drivers for all hardware available on the platform.
And then we haven't even touched on the vendor-specific extensions yet... I've seen plenty of OpenGL applications that only run on nvidia cards because of them. This can never happen in DirectX.
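
For the record, this is what "using" a GL extension looked like on Windows at the time. A rough C++ sketch - multitexture is just an example, and the error handling is minimal:

#include <windows.h>
#include <GL/gl.h>
#include <cstring>
#include <cstdio>

// With a current GL context: check the extension string, then fetch
// the entry point. If the driver doesn't export it, you get to write
// another code path - there is no common runtime to fall back on.
typedef void (APIENTRY *PFNGLACTIVETEXTUREARB)(GLenum texture);

PFNGLACTIVETEXTUREARB LoadMultitexture()
{
    const char* ext = (const char*)glGetString(GL_EXTENSIONS);
    if (!ext || !std::strstr(ext, "GL_ARB_multitexture"))
    {
        std::printf("GL_ARB_multitexture not supported\n");
        return 0;
    }
    return (PFNGLACTIVETEXTUREARB)wglGetProcAddress("glActiveTextureARB");
}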

I wonder why they are not using Windows and DirectX in these applications. Is it not fast enough to do the job? Isn't it a cheaper solution to use cheap clustered x86 processors?


They can choose only one type of CPU, can they not? Just because they chose an Itanium2-powered system doesn't mean it COULDN'T be powered by any other CPU. Why don't you ask them what steered their decision?
Also, they may not use DirectX, but they most probably do not use OpenGL either, so I don't see the point.

PS: ATi cards can already render 'n-patches' in DirectX. This is a form of NURBS.
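
If anyone wants to try it, enabling N-patches in D3D8 looks roughly like this - a sketch, with device creation and mesh setup omitted:

#include <d3d8.h>

// N-patch tessellation: the driver subdivides each triangle into a
// curved patch. Check the cap bit first; only some hardware (e.g.
// ATi's TRUFORM) does this natively.
bool EnableNPatches(IDirect3DDevice8* dev, float segments)
{
    D3DCAPS8 caps;
    dev->GetDeviceCaps(&caps);
    if (!(caps.DevCaps & D3DDEVCAPS_NPATCHES))
        return false; // not supported in hardware

    // This render state takes a float stored in a DWORD.
    dev->SetRenderState(D3DRS_PATCHSEGMENTS, *(DWORD*)&segments);
    return true;
}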


PPS: What technical problems does x86 have that make it unsuitable for anything larger than PCs (even though it does quite well in the supercomputer arena), and why would DirectX not suit larger systems, and why would OpenGL?
Posted on 2003-12-01 03:34:24 by Bruce-li
Another waffling wonder. Like it or not, the big boys and the Linux brigade DON'T use it.


there are non-x86 based implementations.

The problem IS that the DirectX system WAS developed for 32-bit Windows first, and that's where its architectural assumptions lie.

I have yet to hear the count of supercomputers on the list you posted that run Windows and DirectX. Is the answer so simple that they DON'T run Windows and DirectX?


Since Windows is more widespread than all other OSes put together, I don't even think the other OSes matter.

Yes, but only at the bottom end of the scale. You have postured that DirectX is the better technology, but it's not represented at the top end, where OpenGL is.

Most *nix systems are not aimed at graphics in the first place

Unless it's an SGI box that runs and supports Linux.

Besides, good is good enough.

Yes, but only if it IS good enough. If you want high-end performance you don't do it with kids' toys; you use SGI and OpenGL, or one of the other big systems with similar performance.

You want to make us believe that only the best is good enough and everything else is useless. That's rubbish, of course.

Your ignorance of scaling hardware to fit a demand is truly overpowering. Feeding a kid some billions of pixels a second for a game would truly be overkill, but trying to do the same where it's actually needed with x86 is a delusion.

Probably, yes; that depends on the situation.

Shame it took you so long to realise this, as it would have saved everyone the pile of crap you have been going on about.

I give up on trying to get the hardware problems in x86 across, even though the rest of the computing world has known them for years: bus limits that have been tweaked twice to unbung one of the big bottlenecks; the memory address range; the AGP data transfer limit, thanks to Mirno; the limitations in instruction performance as a consequence of some instructions being given priority over others. This aspect may be too technical for you, but if you optimise assembler for a PIV and need it to run OK on earlier stuff, you need to know it.

And OpenGL is not 'stable'; I prefer the term 'outdated'.

"Outdated" depends on your theory; many of its users see it as superior technology that runs on a wider range of hardware, particularly when they are not committed to Microsoft operating systems.

I quoted this data for a reason,

"This Onyx4 system will be capable of powering over 120 megapixels of screen area and has a fill rate of over 40 gigapixels per second

This scale of power is beyond Windows/DirectX performance, and funnily enough, the people who are ordering it tend to know what they are doing with hardware for specific jobs like the one they have in mind. Much the same can be said about the NASA/JPL system in the works.

What you seem to have missed was the 4th item on the supercomputer list at the site it referred to: the use of PlayStation hardware, clustered, to map the universe. Because of its dedicated image processing features and its general performance, it is shaping up to be a very powerful system, particularly if they can find a way to stream it without too much loss.

If you had argued that x86 is adequate for domestic to lower-end commercial image manipulation using DirectX, you would have had many people agree with you: cheap, cheerful, and it does the job most of the time. But you have insisted that it is a competitor for high-end usage, and this is nonsense.

Now to add another point for you, which may be a little beyond your stated comprehension range: there is a fundamental difference between vertical computer usage and horizontal usage. In days of old, Seymour Cray built supercomputers that were just plain faster than the rest at performing certain tasks. That is vertical computer usage, where an application like thousands of terminals attached to an old mainframe was horizontal usage.

Strapping thousands of computers together by itself does not give you vertical capacity unless there is an efficient way to stream the outputs together to yield higher vertical output. Like it or not, the distributed model that you have accepted as being a supercomputer is best demonstrated by the internet: millions of computers strapped together, working off a cross-access network wired around the world.

I wonder what the terraflop count would be on this supercomputer ?

Regards,
Posted on 2003-12-01 05:26:27 by hutch--
The problem IS that the DirectX system WAS developed for 32-bit Windows first, and that's where its architectural assumptions lie.


So you keep saying. I ask yet again: can you name any of these "architectural assumptions" and explain how they limit implementations on other systems?

I have yet to hear the count of supercomputers on the list you posted that run Windows and DirectX. Is the answer so simple that they DON'T run Windows and DirectX?


Firstly, you seem to want to tie supercomputing to DirectX. There is no relation: DirectX is a hardware abstraction layer, like OpenGL. Secondly, read my last post; it points out that Windows 2000 indeed powers supercomputers that are on the top500 list. I don't know how many there are on the list in total... then again, that's hardly relevant, is it? Even if there is only one, it is already proof that Windows can indeed be used as a respectable supercomputer platform. So that has been proven.
And lastly, why do you want Windows now? I thought the initial statement was about x86, and there are more OSes for x86 than just Windows. And we've seen plenty of x86-powered supercomputers in the top500 list.

Yes, but only at the bottom end of the scale. You have postured that DirectX is the better technology, but it's not represented at the top end, where OpenGL is.


Firstly, what is 'the top end'? Supercomputers? Erm, they don't use OpenGL in general. Workstations? They can use both.
Secondly, whether something is used at the 'top end' or not, says nothing about how good the technology is, strictly speaking. While you can assume that when it is used at the 'top end', that it probably gets the job done properly, you cannot assume the opposite. This is where your logic is flawed.
I'll give you an example... Say we have 100 mbit networks and 1 gbit networks...
Now, most of the 'older' 'top end' machines will be using 100 mbit networks, because 1 gbit was not around yet.
1 gbit is currently found on standard PCs, but most 'top end' machines are big investments, and they are not updated yet... I suppose you will agree that 1 gbit is the better technology, although it is not used in the 'top end'?

Unless it's an SGI box that runs and supports Linux.


That's funny... If you ask Linus Torvalds what he aimed linux at, I don't think "high end graphics systems" will be high up on the list.

Yes, but only if it IS good enough. If you want high-end performance you don't do it with kids' toys; you use SGI and OpenGL, or one of the other big systems with similar performance.


As I said already, we don't use SGI anymore. We upgraded our "high end performance" boxes for Autocad and Pro-Engineer and such to regular PCs now.
Also, I think you are confused with the purpose of OpenGL, and the scale at which it is used. Here's a hint: Workstations.

Your ignorance of scaling hardware to fit a demand is truly overpowering. Feeding a kid some billions of pixels a second for a game would truly be overkill, but trying to do the same where it's actually needed with x86 is a delusion.


I don't see the relation between fillrate and x86 anyway. Isn't that an issue for the graphics accelerator?

Shame it took you so long to realise this, as it would have saved everyone the pile of crap you have been going on about.


Nice try, but if you re-read my posts, you will see that my stance never changed. What really happened is that your statement shifted so much that it now fits my stance... Trying to make it sound like you convinced me is rather silly.

I give up on trying to get the hardware problems in x86 across, even though the rest of the computing world has known them for years.


If they are so well-known, why have you not been able to name a single one yet?

the AGP data transfer limit, thanks to Mirno


AGP is not an x86-specific bus. It is also used in Apples, and many workstations, including the previously mentioned HP Itanium2 ones.
Besides, you can also build an x86 system without an AGP bus at all... Shall I name the obvious example of XBox here?
Oh, and get this, it can STILL run Windows, even without an AGP bus! Wow!

the limitations in instruction performance as a consequence of some instructions being given priority over others. This aspect may be too technical for you, but if you optimise assembler for a PIV and need it to run OK on earlier stuff, you need to know it.


Firstly, again this is not an x86-specific phenomenon... Secondly, we are discussing the state-of-the-art performance of x86, and how easy or hard it is to make P4-code run okay on earlier stuff is not relevant.

"Outdated" depends on your theory; many of its users see it as superior technology that runs on a wider range of hardware, particularly when they are not committed to Microsoft operating systems.


Do not confuse the range of hardware that an API is implemented on with its technological superiority. There is no relation whatsoever.
Besides, since OpenGL is much older than DirectX, even without politics it would be very logical for it to be more widespread. Actually, that's a sign of its outdatedness, in a way.
But when we speak of technology, we speak of what the API allows us to do with the hardware, and things like that... So, do you have any technological statements as well? I am not interested in geography.

This scale of power is beyond Windows/DirectX performance, and funnily enough, the people who are ordering it tend to know what they are doing with hardware for specific jobs like the one they have in mind. Much the same can be said about the NASA/JPL system in the works.


Excuse me, but you mention some hardware specifications. Windows/DirectX are software and therefore have no hardware specifications, so you are confusing two things. And since when is fillrate all that matters anyway? If all you want is fillrate, I can point you to a few nice tile rasterizers that suit your appetite well. Let me state the old adage: "Work smarter, not harder." This also applies to your PS2 in the face of the XBox, by the way.

What you seem to have missed was the 4th item on the supercomputer list at the site it referred to: the use of PlayStation hardware, clustered, to map the universe. Because of its dedicated image processing features and its general performance, it is shaping up to be a very powerful system, particularly if they can find a way to stream it without too much loss.


Since it uses neither x86 nor DirectX nor OpenGL to accomplish this, I fail to see the relevance to this particular discussion. Stop sidetracking and stick to the point: technical arguments.

If you had argued that x86 is adequate for domestic to lower-end commercial image manipulation using DirectX, you would have had many people agree with you: cheap, cheerful, and it does the job most of the time. But you have insisted that it is a competitor for high-end usage, and this is nonsense.


Pixar and Google seem to agree with me, among others. Are the people who work there just plain stupid, or is there a point to this?

Strapping thousands of computers together by itself does not give you vertical capacity unless there is an efficient way to stream the outputs together to yield higher vertical output. Like it or not, the distributed model that you have accepted as being a supercomputer is best demonstrated by the internet: millions of computers strapped together, working off a cross-access network wired around the world.


You mentioned this before, and it hasn't become any more relevant now than it was then.
There are different types of clusters. The Pixar cluster, for example, lends itself perfectly to extremely realistic rendering; I doubt the seti@home model would be very good at that.
Then again, you don't have to build clusters from x86 CPUs... clusters are simply cheap ways to get lots of computing power.
But the more 'conventional' supercomputer architecture of massive numbers of CPUs with a large shared pool of memory can still be applied to x86 CPUs as well... so discussing the downsides of clusters is not relevant to the question of whether x86 can be used for supercomputing or graphics.

I wonder what the terraflop count would be on this supercomputer ?


That's TERAFLOP, and if you must know, seti@home has a counter for its FLOP capacity online.

But you have yet to answer this question:
What technical problems does x86 have that make it unsuitable for anything larger than PCs (even though it does quite well in the supercomputer arena), and why would DirectX not suit larger systems, and why would OpenGL?
Posted on 2003-12-01 06:19:29 by Bruce-li

3. You can get motherboards for multiple Xeon or Opteron CPUs. And clustering is as simple as hooking the boxes up to a network, and setting up the OS properly.

Isn't that what they call a Beowulf cluster? (A bunch of computers - they could be any kind of CPUs, from 386s to Xeons - hooked up in an internal high-speed network, where only a few of the nodes have something like a screen or internet access, and Linux/BSD is mostly used as the OS.)
Posted on 2003-12-01 11:26:25 by scientica
Isn't that what they call a Beowulf cluster? (A bunch of computers - they could be any kind of CPUs, from 386s to Xeons - hooked up in an internal high-speed network, where only a few of the nodes have something like a screen or internet access, and Linux/BSD is mostly used as the OS.)


I believe that Beowulf is the name of a project that enables clustering of regular PCs through regular networks via Linux. But the idea is much the same as with other clusters, like the Windows 2000 ones or the SGI Altix.
Beowulf is extremely low-cost, of course. Then again, that doesn't seem to interest hutch-- at all.
Posted on 2003-12-01 11:53:35 by Bruce-li
That's funny: I'm getting quite bored of reading this war, but I can't help reading it nonetheless. :)
I just want to know how it turns out.

Idea:

We should make a poll to ask everybody here to vote on who, of Bruce or hutch, is right. What do you think of this?

I think I know what the result would be, but who knows? Maybe I just made my point too early and am sticking to it now.

(Tonight: the election of
"The One Who Does Not Know What He Is Talking About, And Still Wants To Prove To The Whole World That He Is Right, Thus Further Making A Fool Of Himself"
)


:) :) :)
Posted on 2003-12-01 12:48:16 by HeLLoWorld
I'm afraid many people would vote for Hutch, because he's the "grand old man", and clever with words (like politicians...)
Posted on 2003-12-01 12:50:55 by f0dder
HEY!

I don't know hutch--'s age, but I don't think anyone knows my age either! :)
Posted on 2003-12-01 12:54:47 by Bruce-li
Anyway, I have learnt many things here...
my ignorant self did not know anything about workstations, supercomputers, clusters, and of course clusters of x86,

and clusters of PS2... and clusters run by Win2K... and features of GL versus DX...

But who am I anyway? Just a student who spends his time putting pixels on computer screens HIMSELF, using DDraw or VESA 2.0... and in crappy x86 assembly!! What a shame!! Moreover, in his wildest times, this poor boy plays 3D shooters using the eViL m$ShafT DX API, and he dares enjoy it? Baaah! :)

btw, I did know about RTRT, but I thought it relied even more heavily than rasterizers on BSP structures... it seems there are fully dynamic environments as well... cool... funny spheres... :) It might be the future, especially for algorithmic-complexity reasons... Anyway, with this thread I learnt things, and I really laughed my ass off, so... keep posting! :grin:

btw, I'm still learning to use those javascript smilies :grin:
Posted on 2003-12-01 13:26:24 by HeLLoWorld
No such polls please, HeLLoWorld.

thx
Posted on 2003-12-01 14:32:48 by Hiroshimator
oops... excuse me...
Posted on 2003-12-01 14:52:19 by HeLLoWorld

I did say a "slightly" more secure platform. But that does not diminish the fact that MS wants to offload the games market to the X-Box. If all you're interested in is games, it makes no sense to buy a massively expensive PC when an X-Box performs very well and is only a couple of hundred bucks. There will always be games played on the PC, but the high-quality, fast ones will be better placed on the dedicated platform.

For a good example of what I mean about the advantages of an engine-driven OS, you can look at MMX: when it was introduced there was a large rewrite of 95, giving us 98 - nearly two years of porting. That rewrite did not include the GDI, not because there was no way to do it, but because the extra speed did not warrant the amount of code that needed rewriting. To this day the GDI makes almost no use of even the simplest hardware acceleration. In an engine-driven system, you only have to make the engine capable of using the new feature and all functions (i.e. the whole OS) become enabled at the same time; this type of advantage is noticeable even within the same family of processors.


I get it, donkey.
But I still feel... a little angry. I am sorry if I seemed to direct it at you or somebody here; that was not my intent, I got carried away.
I just feel angry at them - Microsoft, I guess. Why have I spent two years, almost full time now, learning to program the PC Windows platform with APIs,
first in assembler because that was my "native" language, and then C/C++, if now... with the next version, 4 years from now,
I will have spent my time for nothing?
I feel... like I have been tricked or something.
All that time for... like, nothing. Because all this time I thought, yeah, I will have the time to finish this big program of mine (with Windows APIs)
that will give me an income, or at least be the proof that will get me a job. And now I feel there won't be time to finish it, or it won't matter.
Now I feel very discouraged, just because I always trusted in low-level coding.
I always wanted to believe I was better off using the low-level knowledge I had...
... maybe this was of no use at all.
shit.
:(
Posted on 2003-12-01 16:32:25 by david