Bruce
Oh, and of course the Win32 API won't just vanish into thin air when Longhorn is released. Longhorn will still have to be able to run legacy software, so the Win32 API may be with us for a few years yet.


Of course MS is not going to throw out its massive software base; the API will be reduced to wrapper functions. Even Microsoft admits that Longhorn will not be accepted immediately: they expect developer acceptance around 2008, two years after its release in mid-2006. They need some software to run until then.
Posted on 2003-11-28 10:26:24 by donkey
When you want to build an application you use a set of scripts and the engines execute the scripts.


I find your use of 'script' disturbing here.
To me, a script is a piece of source code that is fed to an interpreter, like e.g. batch files, *nix shell scripts, or JavaScript. It is very similar to regular programming, but there are some small differences... For example, since a script is interpreted at runtime, all variables can be allocated dynamically, and don't have to be defined explicitly.
Personally I find this the determining characteristic of scripts.

.NET doesn't use this kind of scripting for applications. Well, you can run JScript.NET actually, but that's not really what applications are about.
Applications are just written in a programming language, and then compiled to .NET bytecode. There are compilers for many languages, including VB, C++, C#, Java, Fortran, COBOL and Delphi, to name but a few popular ones.
If you were a regular high-level programmer, you'd barely notice a thing... The compiler is just 'cut in half'... The .NET bytecode is very similar to the bytecode that virtually all compilers use these days, after the parsing stage and before the native code generation phase, and the JIT-compiler is very similar to the backend of a compiler, which generates native code from the bytecode.
Assembly is the exception here... The VM-system makes assembly obsolete. You should not even think of writing bytecode by hand, since this code is merely a compact representation of the abstract syntax tree, after some high-level optimizations. There should be nothing that you could improve manually, there are no special instructions that compilers don't use... Loop unrolling or scheduling is useless, because that is done by the JIT-compiler anyway, and there is no direct relation between the bytecode and the native code that will be generated from it, so it's hard to predict if and how the output changes, if you alter the bytecode manually.
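Just to make the bytecode point concrete, here is a minimal sketch (the class and method names are made up, and the IL shown in the comment is only approximate): a trivial C# method and roughly what the compiler emits for it. The JIT turns that IL into native code once, the first time the method is called.

    class Demo
    {
        // The JIT-compiler turns the IL below into native code once, the
        // first time Add is called; after that it runs as plain native code.
        static int Add(int a, int b)
        {
            return a + b;
        }

        // Approximate IL the C# compiler emits for Add (what ildasm would show):
        //   ldarg.0   // push the first argument
        //   ldarg.1   // push the second argument
        //   add       // pop both, push the sum
        //   ret       // return the value on top of the stack
    }

As you can see, the IL is just a compact, stack-based form of the parsed source, which is why hand-tuning it buys you nothing.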

However, you will still be able to access native code stored in DLLs. And you can of course use assembly to build such DLLs. That's how you should use .NET...
You use the hardware-independent stuff wherever you can, and if you need that extra performance in cases where the JIT-compiler doesn't deliver for some reason, you move that part of the code to a native DLL and optimize it the old-fashioned way.
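For example, here is a hedged sketch of what that native DLL access looks like from C# via P/Invoke. "MyAsmRoutines.dll" and "FastSum" are made-up names standing in for a DLL you would build yourself, e.g. from hand-written assembly; the MessageBox import is the regular Win32 API.

    using System.Runtime.InteropServices;

    class NativeInterop
    {
        // Hypothetical native DLL, e.g. built from hand-written assembly.
        [DllImport("MyAsmRoutines.dll", CallingConvention = CallingConvention.StdCall)]
        static extern int FastSum(int[] values, int count);

        // Calling the native Win32 API works the same way:
        [DllImport("user32.dll", CharSet = CharSet.Auto)]
        static extern int MessageBox(System.IntPtr hWnd, string text, string caption, uint type);

        static void Main()
        {
            int[] data = { 1, 2, 3, 4 };
            int sum = FastSum(data, data.Length);            // jumps straight into native code
            MessageBox(System.IntPtr.Zero, "Sum = " + sum, "P/Invoke demo", 0);
        }
    }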

It is really not that different from the common C/C++-with-inline-assembly system that we're used to.
Posted on 2003-11-28 11:24:44 by Bruce-li
My definition of a scripting language is one that is not compiled into machine code but run in a JIT engine. For example, Java is to me a scripting language, as it is run from its source code. There is no facility to run C or C++ source code except when using a utility like CScript, which turns it into a scripting language, hence the "Script" suffix. A script is exactly what it sounds like: a group of instructions directing the JIT engine that in turn actually executes a task. As I had said in a previous post but neglected to mention in each subsequent post for brevity, asm in Windows is little more than a scripting language for routines written in C, but at least we can call those functions directly; there is no need to translate to XUML or another language to request that the engine execute the function.

I am not used to any in-line assembly; I don't program in anything but assembler, but for me it is little more than a hobby. As I had also said in a previous post, for all the hoopla and whining, those that rely on their programming for a living will have to adapt or die.
Posted on 2003-11-28 11:36:28 by donkey
For example, Java is to me a scripting language, as it is run from its source code.


But it's not. JavaScript is run from its source code; Java is compiled to bytecode, and that's where it's run from.

A script is exactly what it sounds like, a group of instructions directing the JIT engine that in turn actually executes a task.


You make it sound like you think the instructions get interpreted by the JIT 'engine' all the time...
You do realize that each function is compiled once to native code, and after that the code acts just like native code?

there is no need to translate to XUML or another language to request that the engine execute the function.


With .NET you can still call functions directly, either the native Win32 API, other imported DLLs, or the .NET libraries (which are also partly implemented in native DLLs).
I think you are confusing scripts, source code, compilers and interpreters.
Perhaps this is a good time to download the .NET framework SDK, and check it out a bit?

I am not used to any in-line assembly; I don't program in anything but assembler, but for me it is little more than a hobby. As I had also said in a previous post, for all the hoopla and whining, those that rely on their programming for a living will have to adapt or die.


Perhaps this is a good time to download the .NET framework SDK, and check it out a bit?
Posted on 2003-11-28 12:05:54 by Bruce-li

My definition of a scripting language is one that is not compiled into machine code but run in a JIT engine. For example, Java is to me a scripting language, as it is run from its source code.

Actually it's almost true (with modification): Java files (plain text source files) are translated to byte code (.java -> .class) which is interpreted - so in a way it runs from its source.
To me a script is a _plain text_ file (that's why I consider Java not to be a script language but something just below it) with a set of instructions that is interpreted by an interpreter (which calls functions based on the strings it finds, converts numbers to numbers, text to strings, function names to calls). *nix shell scripts are scripts, .bat, VBS, JavaScript (and M$ weird JScript (which != JavaScript or (JavaScript-stuff+other_stuff) )) too.

IMO Java is the only JIT language one needs, and it's for the web. I don't really fancy the idea of having "regular" apps written in Java, but they exist (one question (/non-subliminal question - really I don't know/): who was first with "JIT-exes", M$ (.NET) or Sun (Java Web Start, .jnlp)?) - but it's not just Java that can be translated to exes; even old (Q)Basic files could be 'translated' to exes using some app, and Python scripts too.
Programming has just stopped being something which only the L33T knew; nowadays even a monkey can program (in VB) without knowing what an opcode is at all - where has this cruel world gone? :( _ _ _ _ _ _ ;)
Posted on 2003-11-28 12:06:38 by scientica
I cannot download .NET except on my dev PC; the company I work for has barred it from the PCs we work on (I work on my home PC as well) because of security and licensing concerns. But I am not talking about .NET here - that framework is designed for existing Windows OSes. I am talking about Longhorn, with .NET being just a preview of the new direction MS wishes to take. Whether that vision is fully realized in Longhorn only time will tell; it is the objective that counts, and Microsoft appears to be at odds with your interpretation in the various articles I have read.

I have not said that the engine-driven OS is unworkable or a bad thing, though I have serious reservations concerning execution speed. The reason I am probably giving up assembly and programming in general is that the company I work for is dumping all MS products and OSes in favor of Linux in mid-2004, and I will be forced to go along. The advent of TCPA and the original concepts behind .NET (which were quickly dropped and never mentioned again), like applications being run from an internet server, and new license verification techniques giving unprecedented access to your machine at a hardware level, have fueled the decision for my company, not the OS itself. ISD was directed to find an alternative a couple of years ago because of the stated intentions of MS; if those have changed slightly since then, well, they should have kept their mouths shut instead of inducing paranoia in the companies with sensitive information that we are legally required to protect.
Posted on 2003-11-28 12:21:01 by donkey
Hi!!!
I've just browsed through this discussion. I would like to think that asm programming will always be in demand for
programming the cutting edge, on any x-bit system... Except maybe for some Isaac Asimov future where robotic brains will outsmart humans. :P

But I read this stuff about the next Windows system you are discussing... I did not know anything about this, and I was a bit shocked
by all this about not letting programs run in asm mode (no compiled code, if I understood you right).

If this is their intent... it seems completely incomprehensible to me.
What would Microsoft's advantage be in this? What advantage over other OSes???
Can somebody explain this to me?

Of course computers will be multi-supra faster than today in 4-8 years, but... would not this scripting-based OS be a lot slower
than other OSes that still used compiled code, asm?
If other OSes like Linux and whatever would still be like normal and Microsoft launched their new 64-bit slow-running scripting OS... would not many people abandon it for other platforms? Microsoft doesn't like to lose money, do they?

All people that use computers for speed would abandon it: professionals in the fields of image manipulation, 3D animation, DSP, whatever you could think of that has to use the highest speeds available... not only that, but I'm also thinking about a much bigger community:
the GAMERS!!! They are a big lot, and speed-obsessed, and they would switch immediately to another OS.
These people would not be fooled by any slogans or brainwashing commercials.

It seems absurd that they would not allow asm code... can anybody explain this, and the motive, to me...? It just seems too weird to me.

( Or did I miss the explanation somewhere in this humungous thread :P )
Posted on 2003-11-28 15:41:14 by david
Bruce,

If I understand your last posting, you have shifted from supporting the virtues of x86 hardware to compete with high-end hardware, to clustering to make up the processor grunt difference.

Along with this you have argued that modern video cards can have a large amount of image manipulation built in, so that the processor can delegate processing to the video card, which performs tasks of that type faster than the main processor.

Obviously you have a reasonable amount of dedicated knowledge about a particular type of system construction, and while I don't know if it's directly electronic or higher-level logic based, the problem with the highly specific nature of your questions and information requests is that there is a far larger world out there where things have been done in many different ways. So while you would know a reasonable amount of specific information, I have yet to hear an argument from you to apply it on a general level across a multitude of different dedicated tasks.

You appear to be confusing the "bang for your buck" notion with high-end performance, but I suggest to you that the simplicity of a single x86 PC with a smart video card is being lost in what you are suggesting with clustering to get the processor grunt to do more complex tasks.

I know guys who were working in the defence department who were clustering VAX hardware 7 or 8 years ago, so the idea is by no means new or original. When I bought the last PC, which ended up being built around a PIV, it was because the price and availability of PIII processors had become a problem: they were only being produced for multiprocessor boards, so they were expensive and hard to get.

A PIV is a better processor in some aspects, particularly with the preferred instruction set running twice as fast for a given clock frequency, but at the time the PIV was not capable of running in multiprocessor systems. Now there are ways around this, but the interface between the processors and the scheduling starts to become a lot slower.

Some years ago there was a system that asked users to dedicate their unused processor time while surfing the internet - basically the idea of distributed processing using a very large number of computers - and while the internet and TCP/IP is a very slow way to perform the distribution, it is the same fundamental logic as processor clustering.

Now, examples of legacy hardware demonstrate the point of fundamental processor differences. How many DX33 486s would it take to emulate the processor grunt of a 400 PII? Correspondingly, how many 400 PIIs would it take to emulate the grunt of a 1.3 gig PIII?

This is the basic assumption you are making: if you strap together enough current PIVs, you can emulate far more powerful hardware. This is a variation on the incremental nature of x86 improvements over time - get the current processor, strap enough of them together, and you can make up the quantum leap to later-designed hardware.

Somewhere along the line, the cheap and cheerful nature of x86 hardware is being lost in massive complexity, when the problem remains that it is an ancient architecture that has only ever been tweaked around the edges to try and overcome some of its fundamental problems of being too old to do the high-end stuff.

Keep in mind the time scale involved from 8- to 16-bit buses, the introduction of PCI to get higher throughput, multi-bus-speed AGP to get the graphics output faster, and later video cards with inbuilt process delegation. Hard disk technology is orders of magnitude larger and affordable memory is a lot faster and cheaper, but it still suffers the problem of an architecture set out 25 years ago.

Yes, you can extract bits and pieces to make them go a bit faster, but to match current technology you have far too much baggage to comply with to be on the pace. What I suggested in the first place was that x86 hardware and DirectX are not a model for other, technically different hardware, as there are better ways of doing these things that don't hold the same set of assumptions.

Regards,
Posted on 2003-11-28 17:27:52 by hutch--
If I understand your last posting, you have shifted from supporting the virtues of x86 hardware to compete with high-end hardware, to clustering to make up the processor grunt difference.


Not really. To me, an x86 is just another processor, just another resource of computational power.
Like all others it has its strong points and its weaknesses, so I didn't really see why it was being hacked at specifically. Obviously its strongest points are that it's cheap and common. Its weakest point is floating point performance, but it's not as dramatic as you want to make it sound.

I have yet to hear an argument from you to apply it on a general level across a multitude of different dedicated tasks.


What exactly is this supposed to mean?

You appear to be confusing the "bang for your buck" notion with high-end performance, but I suggest to you that the simplicity of a single x86 PC with a smart video card is being lost in what you are suggesting with clustering to get the processor grunt to do more complex tasks.


You appear to be confusing different rendering methods and different hardware requirements...
If you are after realtime triangle rasterization, a single PC with a recent videocard will do fine.
If you are after a render farm the likes of Pixar's, for raytraced software-rendered movies, the videocard is useless, but the x86 itself is so cheap that it's hard to pass up when you need massive amounts of processing power. While this may not be an elegant solution to you, I can tell you that elegance is never very high on the list of commercial companies. Things like cost-effectiveness, efficiency and availability seem to score much better, and here the x86 certainly does well.

strap enough of them together and you can make up the quantum leap to later designed hardware.


Well, let's see how large that quantum leap really is then...

http://www.spec.org/cpu2000/results/cpu2000.html

Well, I must say, there are some pretty decent scores from x86 in there, don't you think? Hardly a quantum leap to the fastest systems... And with the price-advantage that x86 has, need I say more?

but it still suffers the problem of an architecture set out 25 years ago.


Yes, you've made statements like these before... The problem is that you never back them up. What kind of problems are we talking about here, exactly, besides some purely aesthetic ones, perhaps?

What I suggested in the first place was that x86 hardware and DirectX are not a model for other, technically different hardware, as there are better ways of doing these things that don't hold the same set of assumptions.


Again, would you care to elaborate on what kind of assumptions you are aiming at here?
You are not going to convince me by just repeating the same statements all over again, without backing them up with information to make them probable.

The 'fluff' that you constantly add, like saying clustering is nothing new, or that there are distributed programs running via the internet, is not going to help either. I'm sure everyone knows clustering wasn't new, and nobody claimed that it was anyway. And I think it's reasonably safe to assume that most people have been exposed to seti@home or similar projects here, or are perhaps even actively participating.
Posted on 2003-11-28 17:53:56 by Bruce-li

Hi!!!
I've just browsed through this discussion. I would like to think that asm-programming will always be in demand for
programming the cutting edge, on any x-bit system... Except maybe for some Isac Asimov-future where robotic-brains will outsmart humans. :P

But, I read this stuff about the next windows system you are discussing... I did not know anything about this, and I was a bit shocked
about all this about not letting programs run in asm-mode ( no compiled code if I understood you right ).

If this is their intent... it seems completely uncomprehendable to me.

What would Microsofts advantage be in this? What advantage over other OS's???
Can somebody explain this to me?

Of course Computers will be multi-supra faster than today in 4-8 years, but... but would not this scripting-based os be a lot slower
than other os that still used compiled code, asm?
If other OS's like linux and whatever would still be like normal and Microsoft launch their new 64bit slow-running scripting os.... would not many people abandon it for other platforms. Microsoft doesn't like loose money do they?

All people that use computers for speed would abandon it, professionals in the fields of image-manipulation, 3d-animation, dsp, whatever you could think of that has to use the highest speeds available... not only that but I'm also thinking about a very much bigger community
the GAMERS!!!. They are a big lot, and speed-obsessed, and they would switch immediately to other OS.
These people would not be fooled by any slogans or brainwashing commersals.

It seems absurd that they would not allow asm-code... can anybody explain this, and the motive to me...? It just seem too weird to me.

( Or did I miss the explanation somewhere in this humungous thread :P )


Well, first off, Microsoft does nothing in a vacuum; everything that is developed in Redmond is geared towards their vision of the future of computers. This is a very good corporate policy and has kept MS at the top of the software world.

Gamers have the X-Box; that is the preferred platform for games at Microsoft and where they want to concentrate those efforts. Since games represent most of the pirating in the world, the X-Box provides them with a slightly more secure platform for game developers. Not that they will drop support for game-specific parts of the OS, but those aspects of the OS may suffer in the new structure.

The PC is their preferred platform for internet and productivity software, but they are increasingly boxed in by their dependence on Intel x86 technology. This is not a commentary on the advantages or lack thereof of that technology - there are plenty of supposed experts here to wax poetic on that. It is just that the Windows desktop cannot take advantage of any new advances in CPU technology without a massive rewrite. That is where an engine-based OS comes in: by having the actual OS independent of the platform, they have only to rewrite the base services and the engines in order to add support for another platform. This can be best seen in the comments by MS at Comdex where they were touting Longhorn; they discussed to no end the fact that the new OS was going to be portable with little change between different embedded systems. So the short answer is that this strategy allows them to more easily leverage their current technology and software in a non-x86 environment and offer an alternative to Linux on all platforms.

Speed has never really been an issue with Microsoft; features are. I have yet to find an application that runs faster on XP than on 2K. Even if I double the memory on the XP system it still runs slower.
Posted on 2003-11-28 20:06:52 by donkey
Watching the shift from plugging the virtues of x86 hardware, to generalising about it being "just another processor", and then to "if you cluster enough of them you can get high-end processing grunt", I don't see the success of the architecture model being used by other hardware systems in conjunction with x86-based architecture in DirectX.

You seem to have wandered far from your original assertions about the virtue of both in the face of many different postings by different people. Does someone running an Apple G5 need Microsoft DX? Do mainframe programmers need to hear about how toothless an x86 is in the face of big iron? Have you wondered why mainframes are still useful on big web sites that take thousands of connections an hour, where a PC is just not big enough?

If you need to find comparative tests between different video manufacturers, look up their web sites or buy the manuals, but as usual you will find that the flavour of the day is tomorrow's leftovers, so the gee-whizz aspect of a particular video card is really no big deal.

What has characterised your argument is a lack of overview. It seems that working in a narrow area has left you blinkered to other things like alternative hardware, but in the face of a number of people posting examples of superior hardware and the limitations of the gaming style of development for serious applications, you seem to come back to the polygon construction rate without realising that it's not the only way to do things.

Roughly, the distinction you are failing to get is that between "particular" and "general". I have no doubt you have a background in very particular hardware, either at the electronic design level or the assembly of existing hardware to a particular scheme of logic, but this does not produce the overview. This is why I don't take you seriously, as the overview entails the large wide world out there, not the range of technical experience you are indicating.

I don't really see that there is anything left to discuss; the only interesting stuff has been posted by other people, and what I have continued to hear is waffle about the gee-whizz aspect of a few video cards and clustering to make up for the lack of processor grunt, and this does not support a universal model of x86 hardware and DirectX.

Yawn.

PS: I did have a look at the SGI site and, surprise surprise, I don't hear x86 and DirectX anywhere when it comes to performance. You can buy an SGI Altix 3000 system starting at only 117 grand US that uses clustered Itanium2 processors and runs Linux. I wonder how you can emulate their shared memory technology with a bundle of x86 processors?

SGI Altix Achieves World Record Memory Bandwidth of 1 Terabyte per Second on Stream Triad Benchmark

I can't be bothered pasting in much more, just look and weep while you calculate how many x86 processors you need to strap together to best this type of architecture. :tongue:
Posted on 2003-11-28 20:52:25 by hutch--
Thanks for your thoughts Donkey.

Since games represent most of the pirating in the world, the X-Box provides them with a slightly more secure platform for game developers

Anybody can download the X-Box games, burn them and play with a chipped X-Box, man. That's yesterday's news. As soon as there is a new platform,
_any_ cracker group will assault it. The fact is that whatever the protection, be it hardware or software, _anybody_ with access to the internet
will be able to have software for free.
Also, the gaming community is on the PC (FPS games on a console ---> consoles are just for children).

I just wanna know if they really would throw away compiled code and asm... that would be suicide, in my view, for a future OS. If a future Windows were script-only, they would be biggest on the market with office stuff, but for gaming and all speed stuff, I am sorry... *sigh* out of their league.
Well, just imagine how it would have gone if I had tried to complete my music production job this week with Cubase SX, including the plugins,
written in Java...
For a fact, I know that the specific plugins I use when making music in Cubase have been optimized with hand-written assembler,
3DNow and SSE. If they hadn't been, I couldn't possibly run them on most people's systems...

Did I make crazy points???!!! Let me know.
Posted on 2003-11-28 23:51:24 by david
I did say a "slightly" more secure platform. But that does not diminish the fact that MS wants to offload the games market to X-Box. If all you're interested in is games it makes no sense to buy a massively expensive PC when an X-Box performs very well and is only a couple of hundred bucks. There will always be games played on the PC but the high quality fast ones will be better placed on the dedicated platform.

For a good example of what I mean about the advantages of an engine-driven OS you can look at MMX: when it was introduced there was a large rewrite of 95 giving us 98, nearly two years of porting. That rewrite did not include the GDI, not because there was no way to do it but because the extra speed did not warrant the amount of code that needed rewriting. To this day the GDI makes almost no use of even the simplest hardware acceleration. In an engine-driven system, you have only to make the engine capable of using the new feature and then all functions (i.e. the whole OS) become enabled at the same time; this type of advantage is noticeable even within the same family of processors.
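A rough sketch of that general pattern in C# (the names here are purely illustrative, nothing taken from Longhorn itself): the capability check lives in one "engine" object chosen once at startup, so every routine that calls through it picks up a new feature at the same time, without being rewritten.

    interface IBlitEngine
    {
        void Blit(byte[] src, byte[] dst);
    }

    class PlainBlitEngine : IBlitEngine
    {
        public void Blit(byte[] src, byte[] dst)
        {
            for (int i = 0; i < src.Length; i++)
                dst[i] = src[i];                  // straightforward copy
        }
    }

    class AcceleratedBlitEngine : IBlitEngine
    {
        public void Blit(byte[] src, byte[] dst)
        {
            // Imagine this delegating to MMX/SSE code in a native DLL.
            System.Buffer.BlockCopy(src, 0, dst, 0, src.Length);
        }
    }

    static class Gdi
    {
        // The capability check happens once, inside the "engine".
        static readonly IBlitEngine engine =
            HardwareSupportsAcceleration() ? (IBlitEngine)new AcceleratedBlitEngine()
                                           : new PlainBlitEngine();

        static bool HardwareSupportsAcceleration()
        {
            return true;   // stand-in for a real CPUID/driver capability check
        }

        // Every drawing routine goes through the engine, so enabling a new
        // CPU feature there enables it for the whole API at the same time.
        public static void CopyBitmap(byte[] src, byte[] dst)
        {
            engine.Blit(src, dst);
        }
    }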
Posted on 2003-11-29 03:18:06 by donkey
Speed has never really been an issue with Microsoft; features are. I have yet to find an application that runs faster on XP than on 2K. Even if I double the memory on the XP system it still runs slower.


Well, if you look here, you see another conclusion:
http://www.windowsadvantage.com/features/11-19-01_xp_fastest.asp
http://pcbuyersguide.com/software/system/WinXP_benchmarks.html

Interesting, isn't it?

Watching the shift from plugging the virtues of x86 hardware, to generalising about it being "just another processor", and then to "if you cluster enough of them you can get high-end processing grunt", I don't see the success of the architecture model being used by other hardware systems in conjunction with x86-based architecture in DirectX.


You must be confused... You slagged x86 off, I pointed out its usefulness. You slagged DirectX off, I asked to give specific examples.
If anyone shifted, it was you, because there were facts that opposed what you said, and you never clarified anything.

Does someone running an Apple G5 need Microsoft DX ?


Apparently, yes. MacSoft has developed DirectX for Mac, so they can port the latest PC games.

Have you wondered why mainframes are still useful on big web sites that take thousands of connections an hour, where a PC is just not big enough?


Would you consider a site like google.com a big one? Because as you know, it runs on x86 boxes. There are plenty of examples of big websites running on x86, because x86 clusters are very cheap and very effective as webserver clusters.

Now, the rest of your post is again fluff, hutch--...
I am tired of this... It is apparent that you simply can't back up your statements. I have shown you plenty of things on x86 that you deemed impossible, such as Pixar and the Ranger terrain engine...
I have shown you that there are SpecINT/FP results on the latest x86 CPUs that are VERY respectable (in fact, here's a better page http://www.aceshardware.com/SPECmine/top.jsp)... x86 might not be the fastest CPUs around, but only one can be the fastest. That doesn't mean that the rest is useless. So basically, your anti-x86 arguments have fallen through, and you have been unable to find any new argument.

As for the DirectX-thing... You have never given any arguments at all... You just stated that it was not suitable and that's that.

So stop acting smug and start backing up your statements, or be a man and admit that you can't back them up and you were wrong.
You don't see that there's anything left to discuss? You didn't even BEGIN to discuss. You just talked unrelated nonsense and tried some patronizing.
If anyone should be doing the patronizing, it should be me, since you can't back your statements up... Heck, do you even know what you are talking about? What do you know about DirectX's internals anyway? Or OpenGL, for that matter?

So shape up or give up. Nobody is going to believe your nonsense.
To this day the GDI makes almost no use of even the simplest hardware acceleration.


Are you crazy? There's LOTS of acceleration in GDI. Has been since Windows 3.0 at least. Hardware line drawing, rectangles, blits, colour depth conversion, etc, etc. Please inform yourself before you speak.

PS: Have you found any info yet to back up your claims of those super-70s emulators and those super-90s arcade machines, hutch--? Or have you given up those claims already?
Posted on 2003-11-29 05:29:06 by Bruce-li
I wonder why you bother when you are short on the hardware, argument and fact. I guess you have missed that I don't give a flying PHUK about the gee-whizz aspect of junky PC video cards. As normal, when you build another box, you buy what works well and will remain compatible for the box's life.

Same old assertions in the absence of evidence; since you don't seem to read so well, I will paste it back in for you.

SGI Altix 3000

A steal at 117 grand US, and the world's leading graphics box with performance that is out of your league.

http://www.sgi.com/

Weep as you read the specs and calculate how many x86 processors you need to emulate this level of performance.

Cheap and cheerful, yes, but high performance, no. Same old story: cash talks and bullshit walks, and all you need to hit the big time in graphics manipulation is the bux$ to do it.

I wonder why they don't use DirectX on their world's fastest graphics box - is it that OpenGL just outperforms it, or is it the Unix OS that was not written by Microsoft?

Do I take what you say seriously? Have you presented the performance that you want to talk about in image manipulation? Have you been misled by all of your own waffle about pixel popping, polygon construction rates and other trivia, when there is hardware and software that performs in a different league to what you have mentioned?

Who cares. :tongue:

Regards,
Posted on 2003-11-29 09:14:01 by hutch--
How this topic has drifted ^_^

The whole OpenGL vs. DirectX thing is for realtime graphics - the kind used in games and engineering apps. I never argued that DirectX should replace whatever rendering systems used in things like movie production (and you can bet they're not using OpenGL there). So, the context is hardware accelerated realtime rendering.

I don't know of any current APIs for this other than OpenGL and DirectX. Once there was Glide, but this was limited to a single card, and it's long dead. Of the two remaining, I find DX superior to GL, because GL doesn't provide the same amount of control. Speed and image quality should be about the same no matter what API you use, but GL coding requires vendor extensions to do anything interesting.

Now, in this context, would hutch please explain why GL is better than DX?
Posted on 2003-11-29 11:19:52 by f0dder
SGI Altix 3000

A steal at 117 grand US, and the world's leading graphics box with performance that is out of your league.


Pardon me, but SGI saying that SGI is the world's leading graphics box is not entirely convincing to me.
Did you know that Apple/Pixar claim pretty much the same, except they say G5 where SGI says Altix?

Also, if this is the leading graphics box, and it's so cheap, why is e.g. Pixar not using it? I think Pixar is one of the leading computer graphics animation companies?

I wonder why they don't use DirectX on their world's fastest graphics box - is it that OpenGL just outperforms it, or is it the Unix OS that was not written by Microsoft?


SGI not using OpenGL on their systems is like Microsoft not using DirectX on their systems.
Besides, the fact that not everyone uses DirectX is not proof that it couldn't work for everyone; the same also goes for OpenGL... In fact, a lot of software can actually run on both DirectX and OpenGL, as I said before, the differences are marginal anyway... Lately OpenGL has even been borrowing from DirectX technology... Such as the ARB2 fragment program extensions; they are remarkably similar to the ps2.0 shaders that Microsoft introduced in Direct3D 9 earlier. Ironically enough, the SGI workstations do NOT support these extensions, their hardware is not up to floating point pixel processing yet...
Need I say more?

when there is hardware and software that performs in a different league to what you have mentioned ?


Is that so? SpecCPU2000 doesn't seem to support this statement. SpecCPU2000 says "x86 has the fastest integer performance, and third-fastest floating point performance". That's a very decent average, so the whole "league" thing is highly exaggerated.

The evidence is still against you... This Altix does not convince me... Even if it were the fastest graphics system, it still says nothing about its performance in general. I'm sure that Google's x86 cluster is much better at web searches than this one, for example.
And you still fail to back up all the other claims as well.
Posted on 2003-11-29 11:48:42 by Bruce-li
which you seem to have failed to understand


Talking about failure of understanding.
I wanted to point out that XP is indeed faster in some cases.
The truth is somewhere in the middle, of course.
XP and 2k are considerably different beasts. XP is not just an improved 2k. Part of it is, you notice that in terms of memory management and hyperthreading, for example.
But on the other hand, you also have a different GUI subsystem (one that supports skinning, which of course introduces extra overhead), and other services and things.
Sometimes it works out better, sometimes it works out worse...
In general it's not a dramatic difference though.
Posted on 2003-11-29 12:18:39 by Bruce-li
The fact is that the so-called RISC core of the Pentium family is bogus; it is a translator that translates CISC instructions for execution in a RISC environment.


You probably didn't get it then.
Firstly, it's not 'The Pentium family', since the original Pentium doesn't really use a RISC-core yet. Pentium Pro does, and this same core is also used in the PII, PIII models and their Celeron and Xeon cousins.
P4 has a new core, which works according to the same principles, but it's more advanced.

There is a translator that translates CISC instructions, obviously... How else would it be able to handle x86 code?
The RISC-core is the backend... The translator translates to so-called micro-operations, which are then dispatched to the backend... And these micro-ops are basically a RISC-like instructionset.

but to imply that they can hold their own in the world of big iron is ludicrous.


Excuse me, but I don't recall myself claiming this. You must have mistaken hutch's statements for mine.

But you don't use a screwdriver to hammer in a nail and you don't use a Pentium in a place where high throughput and multitasking is needed.


I believe I had already covered the dedicated system vs generic system issue.

The fact is that according to many the x86 architecture has fallen hopelessly behind the real RISC processors.


If you knew me any better you'd know that I'd be the last to deny this. However, that is architecture, which boils down to aesthetics... people don't like an architecture that started almost three decades ago and has been patched up over time. Of course, we all like shiny new cars better than rusty old ones as well...
Then again, cars are not just things we look at... We need to use them too, and if the rusty old one can perform about as well as the shiny new one, but costs only a fraction, why shouldn't we use it? Just because it doesn't look that good on the outside?

In the end I can see the difference, as I have both a Mac with a PPC and a P4, and frankly the Mac kicks its ass in graphics and most other image-related things. Though the PC does easily perform better in business applications, I don't ever use my PC or Mac for games, so I have no knowledge of how each performs in the kiddy areas.


Well, that's more or less the same story as with the clusters vs the supercomputers for specific tasks. By the way, could you get into more depth about how the Mac is better at graphics? Since Macs use OpenGL and Radeon/GeForce GPUs on AGP cards, just like PCs, I don't see how that part can differ... You must mean some specific area, such as perhaps raytracing, where you can give the AltiVec unit a good workout?

Different systems are good at different things. And of course, just because one system is better at something doesn't mean that the other system is automatically worthless.

If you just need a lot of horsepower, x86 clusters work fine. Then again, clusters (not just x86-ones, but in general) have their limitations, because they are connected via a relatively slow network. With supercomputers, the CPUs and memory are linked via direct high-bandwidth bridges.
Supercomputers often have less raw power than clusters, but they have the advantage of faster memory access... Which one is better, depends on the task at hand. This is not relevant to our discussion however... Since if you can use a cluster, for example for distributed rendering, such as Pixar does, then building this cluster from x86-CPUs is a good solution.
If you cannot use a cluster, the whole x86-discussion is rather useless, since there are no x86-based supercomputers... Although I believe that Cray is working on an Opteron-based solution.
We'll have to wait for that one before we can judge whether x86 can cut it in the supercomputer world or not.
I think some people just got confused between different types of systems, and their applications.
In short, we spoke of 4 classes:
1) Supercomputers: no x86 subjects around, so there's not much to say about them, is there? One could argue that since there aren't any x86-based supercomputers, x86 is not usable for them... Then again, if Cray indeed gets an Opteron supercomputer out, all that changes, doesn't it?

2) Clusters: x86 clusters are quite popular for websites, including the mighty Google... They are also great for distributed software rendering, and we've seen that the mighty Pixar has upgraded to an x86 cluster as well.

3) Workstations: x86 CPUs are quite close in terms of performance to other popular workstation CPUs such as the Itanium2. Add to that the fact that they both use the same type of AGP cards with the same type of graphics hardware on them, and the fact that x86 workstations only cost a fraction, and we can conclude that x86 has a place here as well; we use them at our university for Autocad and Pro-Engineer, to name a few popular professional graphics workstation programs.

4) Regular office/home PCs. These days, they pretty much use the same hardware as the workstations... There may be less cache in the x86 CPU than in the workstation/server version (Xeon/Opteron), but still, a lot of the performance is there... And the 'game' version of the cards may have less memory or slightly lower clockspeeds, but they are also the same architecture, and a lot of the performance is there...
This means that the simple PCs of today can overshadow the workstations of yesterday... And since the workstations of yesterday were powerful enough to do a lot of professional work on them, this means that this also holds for the simple PCs of today. This is progress.
Want to see what I mean? Go to www.realstorm.com and download the realtime raytracing benchmark, and run it on your simple PC... You will get a reasonable framerate out of this... Now, compare this to the times when you had to use a Cray supercomputer, and have it work for hours just for generating a single raytraced sphere-on-plane style image.
I think this is what hutch-- fails to see.
Posted on 2003-11-29 13:16:32 by Bruce-li
Microsoft was not concerned about the speed decrease, only the features. I think that your comment here makes my point well, thank you.


I think it doesn't, since I mentioned a few things where Microsoft indeed made a speed increase. There's more to XP than just a fancy new skinnable GUI you know. Shame that most people can't seem to get past the exterior of an OS.
Posted on 2003-11-29 13:18:30 by Bruce-li