What the heck are you trying to say, Donkey?

I had assumed that everyone visiting an assembler forum would be familiar with the particular processor incarnations that had the RISC core.


Well, I had assumed that people on a Win32ASM forum would have a better understanding of the pros and cons of the x86 processor, and be more up to date with Windows technology... and not be anti-.NET for some odd, misinformed reasons.
Posted on 2003-11-29 13:41:31 by Bruce-li
If it has no quote or subject then it is a general observation.


The big-iron implication seemed like something I would have said, since I seem to be the only one defending the x86 here... So I thought you understood it as if I implied that... but only hutch-- ever mentioned it, as far as I know.

you seem hell bent on comparing the x86 and graphics cards on a benchmark basis


Not at all... As far as I can recall, the only benchmarks I have posted are the official SPEC CPU2000 ones, and I thought those were generally accepted as the standard in CPU benchmarking anyway.
Other than that, I have mostly been posting real-world facts, which seem to have been ignored by most people, just like the questions I have repeated a few times about WHY exactly x86/DirectX would be useless for anything but PCs and the like.

For myself I use what I find does the best job, Mac for graphics, PC for business


Okay, and can you describe exactly what you find then? Why is the Mac better for graphics? What kind of graphics are we talking about, and what is 'business' then?
You have to give your statements some kind of meaning, else it's just useless fluff, just like hutch-- has been spouting for quite a while now.

By the way, that last statement was not aimed at you specifically, but there was some garbage about .NET from a few people... Also, what I don't understand... If you assume that people know about the RISC-core in x86 CPUs, including yourself, why do you still post some nonsense about how the RISC-core is a translator?
Posted on 2003-11-29 14:01:42 by Bruce-li
Big Iron is a name for IBM mainframe equipment.

And if you cannot quantify any of your statements, other than 'it feels better', why even bother mentioning them?
Heck, you can't even say WHAT KIND of graphics operations you are doing... what use is that then?
There are so many different areas in computer graphics, with so many different requirements, that it's useless to just say "This is better for graphics" anyway.
But as I said before, since Macs basically use the same display cards as PCs do, and OpenGL to drive them, I doubt that you'd see a difference in the hardware-accelerated 3D department; it's just the same thing.
Although, it would not surprise me if the same hardware works better in x86 systems because the main focus of the companies is there... They have much more experience writing drivers for x86/Windows than for PPC/MacOS, so that could show...
Posted on 2003-11-29 14:23:51 by Bruce-li
The Win32 API has a number of problems, that's true... they kept the 16-bit compatibility stuff, and it's obvious that different people have worked on different parts of the API - but at least the various subsystems are relatively consistent. Plus, the API can do most of what's needed by games and applications. I sure as hell wouldn't like being stuck with POSIX and X :-O
Posted on 2003-11-29 14:42:32 by f0dder
And yes, if you take the time to really study the API you can see yet another reason Microsoft has to eventually break from the current model; it is severely disjointed and at best sporadic.


Hacking away at APIs like that is easy...
However, try finding an API that is actually IN USE today which is better...
Windows is not perfect, but it's not like there are any better alternatives; APIs just are like that in general.

My original posts to a thread about 64 bit Windows still stand.


Yes, getting back to 64 bit Windows... Personally, I'll just continue coding the way I did... C/C++ and inline asm.
The C/C++ code won't even need a change for 64 bit.
The inline asm needs to be rewritten, but if it's worth it, then I will.
I think people get confused by the whole .NET thing...
There's 64 bit Windows, which is just that... Windows, but on a 64 bit CPU.
And then there's .NET, which is just... .NET, and is available for 32 bit Windows as well, so whether you use .NET or not is not related to whether you use 32 or 64 bit Windows.
I will eventually swap to .NET with 'inline native code', or whatever you want to call it... the native DLL thing I described earlier... But I have no idea when that time comes.
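To make that concrete, a minimal sketch (made-up function names, MSVC-flavoured) of why the C side ports for free while the inline asm doesn't:

    #include <stddef.h>

    /* Plain C: size_t and pointers scale with the platform, so this
       recompiles for 64 bit Windows without any changes. */
    size_t count_zero_bytes(const unsigned char *p, size_t len)
    {
        size_t i, n = 0;
        for (i = 0; i < len; i++)
            if (p[i] == 0)
                n++;
        return n;
    }

    #if defined(_M_IX86)
    /* The inline-asm path exists for 32 bit builds only - the 64 bit
       MS compiler drops __asm support entirely, so this is the part
       that gets rewritten, or moved out into a native DLL. */
    int add2(int a, int b)
    {
        __asm {
            mov eax, a
            add eax, b          ; return value travels back in EAX
        }
    }
    #endif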
Posted on 2003-11-29 14:45:21 by Bruce-li
No, sorry, it's built on *nix for crying out loud, and then there are all these GUI layers... You have Carbon, Cocoa, X11... whee. Oh, and Objective-C? Please...
Try again.
Posted on 2003-11-29 15:02:26 by Bruce-li
How many people are going to be caught out by "int" when 64 bit hits the shelves!
I guess we should all start naming variables properly (long, and long long :)), otherwise everything will go horribly wrong!

Apple have a big advantage in the API neatness stakes: they can throw everything away and start again, because their user base is small and loyal enough to let them get away with it. There are millions of PCs out there running Windows, and they'd all suddenly stop running applications X, Y, and Z! I whine when the old games I ran on my 486 don't run :D
Microsoft have to keep a much bigger legacy running, otherwise it's financial suicide.

Mirno
Posted on 2003-11-29 15:05:29 by Mirno
How many people are going to be caught out by "int" when 64 bit hits the shelves!
I guess we should all start naming variables properly (long, and long long :)), otherwise everything will go horribly wrong!


MS already took care of that. Never used Visual Studio .NET? The compiler can flag any possible 64-bit issues in your code.
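The classic catch it finds looks like this (a made-up snippet; Win64 is LLP64, so int and long stay 32 bits while pointers grow to 64):

    #include <windows.h>

    void remember_window(HWND hwnd)
    {
        int bad = (int)hwnd;            /* truncates the handle on Win64 -
                                           exactly what the compiler flags */
        INT_PTR good = (INT_PTR)hwnd;   /* pointer-sized integer type from
                                           the Platform SDK - safe on both */
        (void)bad;
        (void)good;
    }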

Apple have a big advantage in the API neatness stakes: they can throw everything away and start again, because their user base is small and loyal enough to let them get away with it.


Perhaps they can, but they don't. In fact, I believe that current Macs can still run 68k applications through an emulator in the OS.
Posted on 2003-11-29 15:10:23 by Bruce-li
I do nigh on all my C under Linux, with a little on Sun boxes.
It needs to run alongside HDL simulations (big-ass 3 gig jobs at times), and I'm just imagining linking to the wrong library and going mad trying to work it out (makefiles can be a pain to hunt through). "I am including foo, I've got the library right there!" and finally, after clean-building everything about 5 times, going *smack forehead* doh!

On the Apple emulation front, yes they do, and it's good of them, but the original one in OS X 10.0 was supposedly sucky. I can't confirm or deny that, but there were complaints that OS 9 versions of things like Photoshop fell over.

Mirno
Posted on 2003-11-29 15:30:42 by Mirno
I'm just imagining linking to the wrong library and going mad trying to work it out


I hope they'll do a smart thing, like putting a flag in the header of the libs, so the architecture can be determined easily, and such mixups are impossible :)
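On the *nix side there effectively is such a flag already: the ELF header carries both the 32/64-bit class and the target machine. A quick-and-dirty checker, as a sketch (assumes the file really is ELF, and doesn't bother byte-swapping e_machine for big-endian objects):

    #include <elf.h>
    #include <stdio.h>

    /* e_ident, e_type and e_machine sit at the same offsets in
       Elf32_Ehdr and Elf64_Ehdr, so reading the 32-bit header is
       enough to classify either kind of file. */
    int main(int argc, char **argv)
    {
        Elf32_Ehdr hdr;
        FILE *f;

        if (argc < 2 || !(f = fopen(argv[1], "rb")))
            return 1;
        if (fread(&hdr, 1, sizeof hdr, f) != sizeof hdr) {
            fclose(f);
            return 1;
        }
        printf("%s: ELF%d, machine %u\n", argv[1],
               hdr.e_ident[EI_CLASS] == ELFCLASS64 ? 64 : 32,
               (unsigned)hdr.e_machine);
        fclose(f);
        return 0;
    }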

On the Apple emulation front, yes they do, and it's good of them, but the original one in OS X 10.0 was supposedly sucky. I can't confirm or deny that, but there were complaints that OS 9 versions of things like Photoshop fell over.


Yeah, that's one area of 'emulation'... OS X has to be backward-compatible with the 'classic' MacOS... But that was not the one I was talking about.
The one I was talking about comes from the early 90s, when Apple was forced to move from 68k to PowerPC CPUs. They included a 68k emulator for legacy support... and I think it is still in there now.

So all in all, I wouldn't use OSX as an example for a nice and clean system, it's about as much of a hackjob as Windows is, really :)
Posted on 2003-11-29 15:36:42 by Bruce-li
It seems that you can lead a dead horse to water but nothing will make it drink. :grin:

Our friend waded into this topic with assertions that anyone who did not see the primacy of x86 hardware in performance terms did not know what they were talking about, when, funnily enough, there are a lot of people around writing x86 assembler who actually do.

What I posted way back in this thread was that x86 hardware has too many problems and is a long way off the pace in high-speed image manipulation. I further commented that assuming an x86 architecture with Microsoft Windows DirectX is a model for different hardware and software is a mistake, and the performance is there on other hardware to prove it.

I would like to think I did something clever by making reference to Silicon Graphics hardware and OpenGL, but in fact everyone and their dog already knows that SGI have been the leaders in high-end image manipulation for many years. We all know the phenomenon of the current x86 processor being more powerful than the mainframes of 20 years ago, but that was just as true when I had a leading-edge 486 DX33 in the early 90s.

Making comparisons to 5-year-old SGI hardware and how x86 clustering can outperform it has the proclamation value of a gnat's fart. Try competing with that type of hardware on a current basis and you find that x86 is simply out of its league.

Now on the other end, most people faced with processing power on the scale of a high-end SGI box would not know what to do with it. Perhaps they could point an optic-fibre Cisco router at it and get the fastest downloads on the planet, but it's a scaling problem.

Where x86 hardware does perform is in the scale that domestic, personal and small-enterprise users work in. For reasonably small amounts of money, you get fast games, good business applications, passable number crunching, DVD playing, music and that whole host of other things that people use an x86 PC for, and it is among the reasons why it's the market leader for this user range.

We all know that it has got better over time, but it's the world's worst-kept secret that the architecture is ancient and not up to pace for demanding technical applications. Its history of endless fudges is well known, and the only real solution is to keep winding up the wick in clock frequency.

A PIV is an interesting beast, particularly when you actually bother to write and benchmark code between it and an earlier PIII. This is problematic when the assumption is based on cross-platform libraries and the magic API that performs the same on all hardware. When you in fact do the benchmarking, you will see what Intel have had to compromise to get some areas faster at the expense of others.

The preferred instruction set - MOV, TEST, ADD, SUB, etc. - runs at twice the speed for a given clock frequency, but at the price of other instructions going further off the pace: LEA, shifts and rotates, string instructions and so on.
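To give a concrete (if simplistic) sketch of the kind of substitution involved - MSVC-style inline asm, with the usual caveat that you benchmark on the actual silicon rather than trust the tables:

    /* Compute x * 4 two ways. On an early PIV the simple ALU ops
       (MOV, ADD, SUB, TEST) run double-pumped while shifts go
       through a slower unit, so the add chain can come out ahead
       there; on a PIII the single shift is at least as good. */
    unsigned times4_shl(unsigned x)
    {
        __asm {
            mov eax, x
            shl eax, 2          ; one shift - off the pace on the PIV
        }                       ; result returned in EAX (MSVC convention)
    }

    unsigned times4_add(unsigned x)
    {
        __asm {
            mov eax, x
            add eax, eax        ; x * 2
            add eax, eax        ; x * 4 - two fast ALU ops
        }
    }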

Floating point is losing out and MMX is off the pace, but the 128-bit XMM instructions are getting faster, and all of these changes follow from boundaries in the underlying architecture of the x86 processor.

The only reason why anyone bothers with hardware of this design age is backwards compatibility and user base. If x86 held 2% of the market, no-one would bother with it but as long as it continues to hold the major market, it will be tweaked and appended to so a bit more performance can be extracted out of it.

When our friend learns this, he will stop making assertions about how leading edge x86 hardware is and how well it performs with the next "gee whizz" video card.

Regards,
Posted on 2003-11-29 18:49:58 by hutch--
Let's try again...

The whole OpenGL vs. DirectX thing is for realtime graphics - the kind used in games and engineering apps. I never argued that DirectX should replace whatever rendering systems are used in things like movie production (and you can bet they're not using OpenGL there). So, the context is hardware-accelerated realtime rendering.

I don't know of any current APIs for this other than OpenGL and DirectX. Once there was Glide, but it was limited to a single vendor's cards, and it's long dead. Of the two remaining, I find DX superior to GL, because GL doesn't provide the same amount of control. Speed and image quality should be about the same no matter what API you use, but GL coding requires vendor extensions to do anything interesting.
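Just to illustrate the extension dance GL puts you through on Windows before anything interesting works (a sketch - assumes a current rendering context, error handling omitted):

    #include <windows.h>
    #include <GL/gl.h>
    #include <string.h>

    /* opengl32.dll only exports GL 1.1, so every newer feature has
       to be fished out of the driver at runtime: check the extension
       string, then grab the entry point via wglGetProcAddress. */
    typedef void (APIENTRY *PFNGLACTIVETEXTUREARBPROC)(GLenum texture);
    static PFNGLACTIVETEXTUREARBPROC pglActiveTextureARB;

    int init_multitexture(void)
    {
        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        if (!ext || !strstr(ext, "GL_ARB_multitexture"))
            return 0;   /* driver doesn't advertise it */
        pglActiveTextureARB = (PFNGLACTIVETEXTUREARBPROC)
            wglGetProcAddress("glActiveTextureARB");
        return pglActiveTextureARB != NULL;
    }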

Now, in this context, would hutch please explain why GL is better than DX?
Posted on 2003-11-29 18:54:23 by f0dder
Our friend waded into this topic with assertions that anyone who did not see the primacy of x86 hardware in performance terms did not know what they were talking about


I don't recall anyone saying that, and this seems to be just some more nonsense from your end, trying to convince others that you are right, when in fact, you have nothing to back your statements up.

What I posted way back in this thread was that x86 hardware has too many problems and is a long way off the pace in high speed image manipulation. I further commented that assuming an x86 architecture with Microsoft windows directX is a model for different hardware and software is a mistake and the performance is there on other hardware to prove it.


Again, you have never backed this up. WHY is x86 off the pace in high-speed image manipulation? What do you mean by that anyway? How do you explain that things like AutoCAD, Pro Engineer, 3D Studio MAX, and such are all available for x86, and are often used professionally?
How do you explain that Apple lost its advantage in Photoshop against the latest breed of x86?
How do you explain the fact that Pixar just bought an x86 cluster?
So, we have a lot of facts that contradict your statement... Are there ANY that back it up? You've not given one yet.

As for DirectX... Since it's not actually implemented on hardware other than regular x86, Itanium2 and G5 computers, it's rather hard to say anything about its performance on larger systems, isn't it?
So, we can only do this the theoretical way... That is why you should explain why exactly DirectX would not be suitable... Tell me what part of its design would prohibit it from performing well on other hardware?
And how does OpenGL relate to all this? Is this model also not suitable for this hardware? What is the difference exactly? In fact, what kind of hardware and software are you thinking about anyway, specifically?

I would like to think I did something clever by making reference to Silicon Graphics hardware and OpenGL


Well not really, since as I've said a few times before, their current hardware doesn't quite reach the level of standard PC 'game' hardware... We have 12-bit integer colour components where the PC stuff has 24-bit floating point, or more... And we have a 24-bit z-buffer where PCs have 32-bit these days. Also, the SGI workstations aren't very impressive, really, with their old MIPS CPUs running at short of 1 GHz.
SGI is having some trouble keeping up these days. In fact, your beloved Altix has a rather sad 36 GB disk in it... Even my PC has more than that. Also, it is simply built on standard Itanium2 CPUs, no longer the specially optimized MIPS CPUs... And it runs Linux... This all comes very close to generic x86-grade software and hardware.
Oh, and of course there is OpenGL, which struggles to support all the latest features that DirectX 9 already supports... As I said earlier though, they already 'borrowed' ps2.0 in the form of the ARB2 fragment program extension... And Cg/glslang are just clones of HLSL, obviously... So how exactly are SGI hardware and OpenGL better again?

Try competing with that type of hardware on a current basis and you find that x86 is simply out of its league.


In case you didn't get it yet, SGI has barely moved in the past few years, compared to the x86 market. That's the whole point here. SGI was once THE supplier of graphics systems, but these days they offer either their outdated workstations or rather generic 'supercomputers' which can be clustered... HP offers almost exactly the same thing with their Superdome. SGI is nothing special anymore; they slipped.

The only reason why anyone bothers with hardware of this design age is backwards compatibility and user base. If x86 held 2% of the market, no-one would bother with it but as long as it continues to hold the major market, it will be tweaked and appended to so a bit more performance can be extracted out of it.


What you don't seem to understand, though, is that this strategy is actually working. Since x86 has such a large market share, it also receives an extreme amount of R&D funding compared to other CPUs... Think of this funding as the proverbial 'larger hammer'.
No other CPU comes anywhere near the 3.4 GHz that you can currently get from your local P4, which means it can make up for a lot of the efficiency that the x86 inherently lacks.
Brute force may not be elegant, but it DOES work.

When our friend learns this, he will stop making assertions about how leading edge x86 hardware is and how well it performs with the next "gee whizz" video card.


Firstly, I never said x86 was leading edge, so stop putting words in my mouth to try to convince others. It's a rather weak alternative to coming up with proper backup for your statements.
Secondly, the video card is not related to x86 at all... or DirectX, for that matter.
Video cards work equally well with DirectX and OpenGL, and they also work equally well with any system that offers an AGP port... be it an HP Itanium2 workstation, an Apple G5, or just a generic P4 system.

PS: I just dug up some news about the Cray Opteron: http://news.com.com/2100-1001-962787.html?part=wht
You DO classify Opteron as an x86, don't you? What will you say once the system is completed? "x86 is not suitable for the big stuff, like mainframes, supercomputers etc"? You'll have to come up with something better.
Posted on 2003-11-29 19:15:10 by Bruce-li

Let's try again...

The whole OpenGL vs. DirectX thing is for realtime graphics - the kind used in games and engineering apps. I never argued that DirectX should replace whatever rendering systems are used in things like movie production (and you can bet they're not using OpenGL there). So, the context is hardware-accelerated realtime rendering.

I don't know of any current APIs for this other than OpenGL and DirectX. Once there was Glide, but it was limited to a single vendor's cards, and it's long dead. Of the two remaining, I find DX superior to GL, because GL doesn't provide the same amount of control. Speed and image quality should be about the same no matter what API you use, but GL coding requires vendor extensions to do anything interesting.

Now, in this context, would hutch please explain why GL is better than DX?


You are caught in the middle of an all-out war; the chances of your question being answered are slim.
Posted on 2003-11-29 22:12:50 by x86asm
Bruce post 1

So would you like to state some facts, or are you ready to admit that you don't know what you're talking about?

Ego massaging may be PHUN, but you have delivered no more than waffle in the face of far superior hardware.

I pointed you to SGI's web site but you failed to read the benchmarks and continued to waffle on about clustering and whatever the latest crapheap video card may happen to be.

How many x86 processors will you have to strap together to get into the high end that SGI already control?

Now I am pleased to see that you at least know the names of some of the applications that run on x86 hardware, but it does not form a reference for your waffling about high-end performance when it's not there.

I used the T-model Ford to make a point. It's like comparing what you can do to hot up something that ancient so that it can compete with the propulsion system of a space shuttle.

HAY man (like wot horses eat), I have solid titanium rotary-valve cylinder heads on my T-model and a forged crank, con rods polished with the pubic hairs of red-light-area virgins, and an output fan that will almost blow your hat off.

Now when the guy at the rocket propulsion lab gets back up off the floor from laughing and explains how much thrust is required to get a shuttle off the ground, you pipe up with "Yeah, but if I strap 10000 of them together it will be powerful enough to make a direct flight to Pluto."

After he gets up off the ground from laughing again, he explains that the propulsion system for a Saturn V made back in the 60s could blow the entire collection of T-model Fords back onto the scrapheap.

"Yeah but maybe if we strapped together 1000000 T model fords with after market bits it really could go to Alpha Centauri."

In effect this is what you are doing, and the primacy of SGI hardware IS so well known that what I am hearing is still nonsense, no matter how much enthusiasm you may have for pissing around with this stuff.

While you may feel comfortable in the reminiscences of 5-year-old SGI stuff, a quick browse through the press releases on their site will find the independently published benchmarks, which will make you weep as you do the calculations to find out how many x86s you need to strap together to match it - and then you have to work out how to deliver the shared memory access.

Sooner or later it will dawn on you that this stuff is out of your league.

Coleridge
"A sadder but a wiser man he woke the morrow morn."

Hint: Keep a box of tissues to wipe the tear stains from your face as you actually read the SGI benchmarks. :tongue:

Regards,
Posted on 2003-11-29 23:57:30 by hutch--
f0dder,


Now, in this context, would hutch please explain why GL is better than DX?


In this context, use DirectX; it's designed for the x86 platform under Windows.

As you would remember, DirectX was originally written to coax the gaming brigade from 32 bit protected mode DOS to Windows, so that support for Windows would increase.

Regards,

PS: A free eval version of SGI's OpenGL Performer for Windows.

http://www.sgi.com/products/evaluation/windows_performer_3.0.2/
Posted on 2003-11-30 00:01:02 by hutch--
How many x86s strapped together did you say?

SGI Altix 3000 Performance Lead Rolls On with Latest SPEC Benchmark Results
http://www.sgi.com/newsroom/press_releases/2003/august/altix_benchmark.html

SGI Altix Achieves World Record Memory Bandwidth of 1 Terabyte per Second on Stream Triad Benchmark
http://www.sgi.com/newsroom/press_releases/2003/november/benchmark.html

Performance Benchmarks
http://www.sgi.com/servers/altix/benchmarks.html
Read the PDF file.

Now just be careful you don't produce an ocean of tears as you read this stuff; it may damage your keyboard.

Regards,
Posted on 2003-11-30 00:46:46 by hutch--
I hate to do this, hutch--, but you leave me no alternative...
http://www.top500.org/list/2003/11/

The top 500 of supercomputers.
Now, if you will search for SGI, you will find that the highest ranked one is the SGI Altix at position 41.
Okay... So it's not the fastest supercomputer, you may be able to live with that...
But... if we look at the highest-placed x86-based computer... we find one at position 4!
Oh no!
There are plenty of x86-based systems in that list that easily outperform the Altix... Sure, the Altix only has 416 CPUs, but it's partly a cluster, since a single Altix only fits up to 64 CPUs.
You may need a lot more x86 CPUs - the Xeon cluster at position 38, for example, needs 600 CPUs to beat the Altix's 416... Then again, 600 Xeons are cheaper than 416 Itanium2s. They are also less power-hungry, and they are smaller. So I think it's not too big of a problem to need a few more CPUs.

Anyway, you missed the point. I didn't ask you to produce some SGI marketing material, or regurgitate the marvels of clustered systems in general... I asked you to give specific technical arguments why x86/DirectX would not suit anything but PCs - which you keep claiming, but never actually manage to make plausible, since you do not support it with arguments.
I mean, no matter how good SGI systems are, as long as they don't use x86 or DirectX, their performance says nothing about x86 or DirectX, does it?

Sooner or later it will dawn on you that this stuff is out of your league


I am not hardware, so I am not in any kind of hardware league anyway. Speaking of out-of-one's-league, why do you refuse to give any technical arguments to support your statements? You simply don't have any?

Okay, it's time to give up the whole SGI thing now; as you can see, in the supercomputer arena the Altix performance is not impressive anyway, and besides, as I said before, it doesn't use x86 or DirectX, so it is not relevant to the discussion at hand...

Okay, let's ask again: What technical problems does x86 have that make it not suitable for anything larger than PCs (even though it does quite well in the supercomputer arena), and why would DirectX not suit larger systems, and why would OpenGL? Technical arguments please, this time.

In this context, use DirectX; it's designed for the x86 platform under Windows.

As you would remember, DirectX was originally written to coax the gaming brigade from 32 bit protected mode DOS to Windows, so that support for Windows would increase.


Wrong answer. The first DirectX was designed for that, but the API has undergone many changes in the meantime...
Besides, by that same logic we can say that OpenGL was designed for SGI workstations in 1989 - with the difference that OpenGL never had any large changes; it only had a few features added now and then.
So OpenGL is still close to that original 1989 design... This should not fit current high-end systems very well, should it? Unless SGI never progressed much, that is.
Posted on 2003-11-30 05:13:35 by Bruce-li
LOL :) this is a fun thread, it's almost flaming, but still a bit informative. One question (which probably will never be answered): how do all of you have the time to write so much? (It feels like there is one more page every time I check this thread.)
Posted on 2003-11-30 07:54:58 by scientica
Posted on 2003-11-30 08:02:22 by S.T.A.S.