The only thing I will admit is that software has been developed in a linear way for a long time, and as multi-processor systems become more prevalent this will shift.


I'm willing to go as far as to say that if it made a significant difference, parallel solutions would have been developed and multi-processor systems would already be in use (we are not talking about small companies whose budgets would be too limited to invest in multi-processor systems for this task).
In virtually every area where multi-processor systems can make a significant difference, they are used. The same can be said for 64-bit systems, I suppose.
Posted on 2004-05-25 10:36:41 by Scali
Scali, I'm not going to do anything in my life because obviously anything worth doing would have been done previously -- I mean the people that came before me were not stupid and if it was worth doing they most certainly would have done it! Oh, please - your failed logic shakles you to the ground.
Posted on 2004-05-25 11:15:32 by bitRAKE
Oh, please - your failed logic shakles you to the ground.


At least I know how to spell 'shackles'? :)
Anyway, my logic is not flawed, since extensive research has been done on parallel problem solving (hey my uni even has a course on it!), and multi-processor systems have been available for ages.
So a lot is known about this area, and unless companies such as Intel hire a bunch of unqualified idiots, their employees are also familiar with the possibilities in that area, and therefore would have experimented with parallel solutions, if such solutions worked.
In fact, they may actually have tried such solutions already, but concluded that they didn't work.
But it would be foolish to think that nobody ever thought about the subject before you started a thread here about quad Opterons. Especially since this subject is too obvious for anyone to have overlooked it in the past... 20? years that we've had multi-CPU systems and software for validation.

My logic did not fail; your interpretation of it did, and I think that was deliberate (I did not at all say what you just 'paraphrased' above), which is rude. I guess it is because you are out of technical arguments.
Posted on 2004-05-25 11:23:04 by Scali
You digress, huh? :) Well, I'll move on.

I don't believe I said anything about "nobody ever thought about the subject before" - that wouldn't be a very intelligent position. I merely suggest more thinking is needed to progress in this area.
Posted on 2004-05-25 11:43:48 by bitRAKE
I merely suggest more thinking is needed to progress in this area.


And I suggested that budgets aren't a problem in such companies, and that the people working there are quite familiar with the possibilities in this area, so if it were just a matter of more thinking, the progress would have been made years ago. You do realize that Opterons aren't the first multi-CPU systems, don't you?
With all the knowledge and resources that companies such as Intel and AMD have, not to mention IBM or HP, don't you think that progress would have been made by now? IBM and HP are especially interesting here. They build large-scale 64-bit multi-processor solutions themselves. Don't you think anyone at those companies would have said "Hey, these big machines that we make, why don't we use one at our integrated circuit division so we can validate our designs much quicker" if it made any sense?
What exactly is your suggestion based on?
Posted on 2004-05-25 12:15:57 by Scali
Mmm, sometimes even when the knowledge is there, it is not easy to connect or merge it. For example, introducing the symbol "=" took a long time; I don't remember the story exactly, but it was amazing that such a simple symbol, which we now take for granted, has such a history (IIRC), and a lot of interesting problems were solved without that expression or symbol ;).

So maybe they actually can't. Having money and power does not mean that everything is easy.


Have a nice day or night.

It even happens sometimes (at least to me) that in the description of a problem or a solution I am using or saying something important but can't see it; there are hidden things that are not easy to extract, even if you are already using them. It also happens sometimes that you give an answer and can't see which question that answer belongs to.
Posted on 2004-05-25 12:26:31 by rea
The "=" is a different story. I suppose that nobody ever looked for it, because nobody had the need to.
Eventually, someone came up with it, and then everyone thought "Hum that is pretty useful actually".
But I am quite sure that people are constantly looking for ways to manufacture chips faster and cheaper, since a lot of money can be saved there. bitRAKE, on the other hand, seems to be under the impression that people never even considered the possibility. I think that is a bit too naive.
Posted on 2004-05-25 12:32:38 by Scali
Yes, maybe nobody needed it, but they were using a 'hidden' thing: a lot of problems were solved before, without knowledge of this symbol or "equality thing". Still, I think it is better to have this 'property' out in the open (I would like to see how those problems were solved in the past ;) ).

Also note that the possibilities cover a wide range, and they have their limits, but to follow a possibility you first need to hit upon it, or see it.

But maybe they fail (not all research gives the result they want); or maybe they don't fail because it is impossible, but because they started somewhere else, or because something else needs to be solved first. I don't know how that works (I don't know how they guide their research projects or what objectives they have). Perhaps they are using some hidden thing that could be exploited, but they have no knowledge of it (can't see it? ;) ); or maybe they see it (unhide the thing) but don't understand it; or they see and understand it, but don't want the result to be that :D. In my view they will keep searching (as long as the money is sufficient to back the research; not all paths are taken at once), and they will only stop their research when they, or some smart people, say that nothing more can be done and demonstrate it mathematically or in some other way (which is also a good result for research...).

Have a nice day or night.
Posted on 2004-05-25 13:06:19 by rea
Originally posted by Scali


And how do you know it sucks at running PC software? Microsoft has not yet released a G5-version of VirtualPC, so how well it performs is anyone's guess.

Because Microsoft published a press release explaining the delay of the release of VirtualPC. I would suspect that if anyone can guess at how VirtualPC performs at this point, it would be Microsoft.




So yes, in some cases it is quite possible to beat a faster clocked PC with a lower clocked Mac.

Boy, you could step in for Steve Jobs some time :-). I think everyone around here will agree that for *some* cases and *certain* benchmarks we can find something that runs faster on a Mac than on a PC. It's always quite obvious that in *some* cases PIIIs outrun faster-clocked PIVs.

Now do you want to use that fact to argue that PIIIs are generally faster than PIVs, or that Macs are generally faster than PCs?




I find it very rude.

Good. Because *YOU* are exceedingly rude. Glad to return the favor sometimes :-) As for the whole issue of the Endian Bit, yes, all G3/G4 chips supported this. And, AFAIK, all the 6xx chips ever used on the Mac platform did too. This is how it was possible to run Windows natively on a PPC (which requires a little-endian architecture) back in the days when Microsoft still supported other CPUs for Windows.
Cheers,
Randy Hyde
Posted on 2004-05-25 13:13:08 by rhyde
Originally posted by Scali

You will at least have to point me in the right direction. That is, assuming you are not just making unfounded flames against Intel's multi-processor designs.

Scali - take your own advice sometime.

You constantly make posts as though you are an *expert* on everything in this thread when, in fact, you often haven't got a clue what you're talking about.


If you think that MMX/SSE has anything to do with RISC, then it's you who needs to do some research.

You criticize others for not backing up their claims; now it's your turn. Please convince us peons that the MMX/SSE instruction sets could not possibly have anything to do with RISC. It will be interesting to see your arguments.
Cheers,
Randy Hyde
Posted on 2004-05-25 13:20:24 by rhyde

bitRAKE, on the other hand, seems to be under the impression that people never even considered the possibility. I think that is a bit too naive.
Again, you digress. Never have I said such a thing.
Posted on 2004-05-25 13:28:36 by bitRAKE
when, in fact, you often haven't got a clue what you're talking about.


Care to back that up, or shall I just consider this an uncalled-for personal attack?

Please convince us peons that the MMX/SSE instruction sets could not possibly have anything to do with RISC. It will be interesting to see your arguments.


Well, the simplest answer would be that MMX and SSE are actually extensions of a CISC architecture. An instruction set cannot be partly RISC and partly CISC; it is either RISC or CISC. So in this case it is CISC.
More in depth, one can say that memory operands, such as those allowed by MMX/SSE, are not in line with RISC ideology, and the choice of only 2 operands instead of 3 is also rare in the RISC world, as is the low count of only 8 registers.
Then if we look at the implementation of MMX, which shares the FPU registers, we can conclude that this is a very complex way of designing part of an instruction set, which again goes against the RISC ideology.

To conclude, if bitRAKE were to say that SIMD is a RISC feature, I can simply say that x86 is not a RISC ISA, yet it offers MMX/SSE, so apparently it can also be a CISC feature. And of course, if you compare it to the alternatives offered by RISC CPUs such as the PowerPC (AltiVec), we have to conclude that MMX/SSE/SSE2 are still very much based on the x86 design, and not nearly as flexible and powerful as the alternatives, which were designed much more 'from a clean slate'.
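
To illustrate the operand-format difference with a quick sketch (just an example I wrote for this post, x86 syntax, with the AltiVec side shown only in comments): SSE is two-operand and can read one source straight from memory, overwriting the destination, while AltiVec is three-operand and register-to-register only.

    ; x86/SSE: two operands, the destination doubles as a source,
    ; and one source may come directly from memory (CISC-style).
    movaps xmm0, [esi]       ; load 4 packed floats (16-byte aligned)
    addps  xmm0, [edi]       ; xmm0 = xmm0 + [edi]; the first source is overwritten

    ; AltiVec on PowerPC, for comparison: load/store architecture,
    ; three register operands, 32 vector registers.
    ;   lvx    v0, 0, r3      ; explicit vector load from the address in r3
    ;   lvx    v1, 0, r4
    ;   vaddfp v2, v0, v1     ; v2 = v0 + v1; both sources are preserved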

Is that what you wanted to hear?
Posted on 2004-05-25 14:25:46 by Scali
Never have I said such a thing.


Okay, you only said they haven't considered it enough.
But then I would argue that if they had considered it at all, and it worked, there would be some (primitive) parallel solutions available today (at least some drafts or prototypes or such). And from that I conclude that either they considered it and it didn't work, in which case they have considered it enough, or they have not considered it at all.
Posted on 2004-05-25 14:28:18 by Scali
Because Microsoft published a press release explaining the delay of the release of VirtualPC.


I doubt that the press release contained anything along the lines of "performance sucks". Even if it did, it would be purely subjective and not any kind of accurate indication. The fact remains that you have not seen it running, and you have no accurate info on its performance, so your claim of "it sucks" is unfounded.

As for the whole issue of the Endian Bit, yes, all G3/G4 chips supported this. And, AFAIK, all the 6xx chips ever used on the Mac platform, did too.


Yes, as I said, it was a Motorola feature, and Apple used Motorola until the G5. Incidentally, my Matrox card also has dual-endian support. I wonder if that feature was there mainly for Macs.
Posted on 2004-05-25 14:34:29 by Scali

Okay, you only said they haven't considered it enough.
But then I would argue that if they had considered it at all, and it worked, there would be some (primitive) parallel solutions available today (at least some drafts or prototypes or such). And from that I conclude that either they considered it and it didn't work, in which case they have considered it enough, or they have not considered it at all.
Scali, you are caught in a loop within your head: (reply). Why do you even go to school, then - it has all been done before, based on your logic, or maybe I am not reading correctly. There have been amazing discoveries in linear processing in the last ten years, and you are trying to tell me no such discoveries could be made in parallel processing?

Please note the reinforcement that takes place on a cultural level to limit thinking to processing of a linear nature, and you should easily be able to understand why further advances are so difficult in this area.
Posted on 2004-05-25 15:24:47 by bitRAKE
or maybe I am not reading correctly.


Apparently not.

and you are trying to tell me no such discoveries could be made in parallel processing?


As I say, the tools for parallel processing are there. How do you explain the fact that they are not used?
Perhaps it can even be proven that parallel processing offers no gains here. I don't know. You can often prove a lot about a solution you don't have. Ever studied computational science/fundamental informatics?
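
To put that more concretely (this is just the textbook bound, nothing specific to whatever validation software Intel or AMD actually use): Amdahl's law lets you put a hard limit on a parallel solution before anyone builds it. If a fraction p of the work can be parallelized over n processors, the best possible speedup is

    speedup(n) = 1 / ((1 - p) + p / n)  <=  1 / (1 - p)

so if, say, only half of such a job parallelizes, no number of CPUs will ever make it more than twice as fast.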
Posted on 2004-05-25 15:41:54 by Scali
Scali, oh I understand you now. You are saying I need to go to an institution of higher learning or I can't possibly contribute.
Posted on 2004-05-25 15:50:16 by bitRAKE
Wrong again
Posted on 2004-05-25 15:51:27 by Scali
Well, then it is not clear at all.

It is not like I'm wandering around blindly in the dark. :)

I was not talking about tools for parallel processing, but about the development of algorithms for those parallel tools - which we know to be more efficient in terms of space, time, and power consumption.

It is still being considered, so it has not been considered enough.
Posted on 2004-05-25 16:01:35 by bitRAKE
This has turned into a TRULY SERIOUS conversation. :tongue:
Posted on 2004-05-26 05:15:54 by hutch--