When anybody watches TV, they can hear a lot of information
about "supercomputers" that can perform xxx operations per
second. That seems like a very strange definition to me, because there
are 32-bit and 64-bit operations, and a single SCAS operation can
take longer than ten or more DEC register operations; besides
that, there are floating-point operations, MMX operations,
and so on. I can measure the speed of my computer in Hardstones
and Dhrystones. Now some questions:
1. Which computers count as "supercomputers" nowadays?
2. How can I measure the speed of each computer, and in which units?
3. Are the sources of programs that do this available anywhere?
Thanks to everybody who can answer me,
Mike.
Posted on 2002-01-03 06:54:27 by Mike
Sorry, I can't answer the question, but I am interested to know the answer. :)
Posted on 2002-01-03 12:56:53 by lackluster
Well, I don't know if Cray Research is still around, but their computers were the best known supercomputers.

The Whetstone benchmark (from the UK) was created to measure floating-point speed.
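For a rough idea of what a benchmark like that does, here is a minimal C sketch (just an illustration, not the real Whetstone source): time a known number of floating-point operations and divide by how long they took.

/* flops.c - crude MFLOPS estimate (illustration only)             */
#include <stdio.h>
#include <time.h>

int main(void)
{
    const long iters = 10000000L;   /* raise this on a fast machine */
    volatile double x = 1.000001;   /* volatile so the loop is not  */
    double sum = 0.0;               /* optimised away completely    */
    clock_t start, end;
    long i;

    start = clock();
    for (i = 0; i < iters; i++)
        sum += x * x;               /* roughly 2 FP operations/pass */
    end = clock();

    {
        double secs   = (double)(end - start) / CLOCKS_PER_SEC;
        double mflops = (2.0 * iters) / secs / 1.0e6;
        printf("sum = %f, ~%.1f MFLOPS\n", sum, mflops);
    }
    return 0;
}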
Posted on 2002-01-03 13:27:58 by tank
Think of it this way:
• The common wrist-watch calculator is more powerful than the ENIAC.
• Your desktop box is more powerful than the mainframes of yesterday.
Posted on 2002-01-03 20:05:57 by eet_1024
Super computers are basically the most powerful computers around...
If there were nothing else available, a Z80 would be a supercomputer!

The majority of people "in the know" will point you in the direction of parallel computing. As tank said, Cray (Seymour Cray, the founder of Cray Research) was the first real supercomputer engineer. Initially, supercomputers were based on a shared memory model, meaning that all parallelism was achieved by all processors having access to some shared data. To start with, the shared memory was exclusive read/write, and it later moved on to concurrent reading and writing (processors could access the same memory at the same time, with some arbitration if two processors wrote at the same time).
As a design it is simple from a hardware engineer's point of view, but it has one major stumbling block: shared memory supercomputers will saturate the bus that connects them to the shared memory at around 32-64 processors (depending on the task and the technology involved).
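To give a rough software flavour of the shared memory idea, here is a toy C example using POSIX threads (my choice purely for illustration; a real shared memory machine does the arbitration in hardware, across far more processors). Every "processor" reads the same shared array freely, and a lock arbitrates the concurrent writes to the shared result.

/* shared.c - toy shared-memory parallel sum (illustration only)   */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define N        1000000

static double data[N];                     /* the shared memory     */
static double total = 0.0;                 /* shared result         */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    long id = (long)arg;
    long lo = id * (N / NTHREADS);
    long hi = lo + (N / NTHREADS);
    double local = 0.0;
    long i;

    for (i = lo; i < hi; i++)              /* read shared data freely */
        local += data[i];

    pthread_mutex_lock(&lock);             /* arbitrate the write     */
    total += local;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    long i;

    for (i = 0; i < N; i++)
        data[i] = 1.0;

    for (i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);

    printf("total = %f\n", total);         /* expect 1000000.0        */
    return 0;
}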

This brought about the vector processing systems that we use today. By not connecting all processors to each other (as the shared memory effectively does), you reduce costs and improve scalability. The big problem with vector processing parallel systems is that they require more effort on the part of the programmer, and they are not as versatile. Because data cannot be passed around freely, it is necessary to organise the order in which data is processed, and where it is processed (in relation to where it is needed). There are systems that are connected in two, three, and even four dimensions (not actually connected in four dimensions obviously, just connecting processor X in one cabinet to its equivalent in another cabinet)!
Vector processing systems are in theory infinitely scalable, and they are the ones which dominate the modern supercomputing world.
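For contrast, here is the same sum done in the distributed style, where each node owns its own slice of the data and results have to be sent explicitly to where they are needed. I've sketched it with MPI purely as an illustration of message passing (the choice of library is mine, nothing more):

/* dist.c - toy message-passing sum (illustration only)            */
#include <mpi.h>
#include <stdio.h>

#define N_PER_NODE 100000

static double data[N_PER_NODE];            /* each node's private slice */

int main(int argc, char **argv)
{
    int rank, size, i;
    double local = 0.0, total = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (i = 0; i < N_PER_NODE; i++)       /* fill and sum local data   */
        data[i] = 1.0;
    for (i = 0; i < N_PER_NODE; i++)
        local += data[i];

    /* Partial results travel over the interconnect to node 0; this
       explicit communication is what the programmer has to plan.    */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total over %d nodes = %f\n", size, total);

    MPI_Finalize();
    return 0;
}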

If you are really interested in the modelling of supercomputers, the shared memory models are a good place to start, as a shared memory system can easily emulate any vector system (but not necessarily vice versa). In particular, look at P-RAMs (parallel random access machines), which were designed as a classroom explanation of parallel computers without the hassle of physical hardware (which will have quirks particular to it). Having said that, there is a project where a university has built one (the SB-PRAM project, http://www-wjp.cs.uni-sb.de/projects/sbpram/ ), which uses 64 processors running at 26MHz and can emulate up to 2048 processors...

Finally two last things:
1) eet_1024: A wrist watch may be more powerful than ENIAC, but a Pentium II couldn't outperform Colossus at cracking German codes in WWII (and Colossus came before ENIAC, and was also a programmable computer). So it just goes to show, optimisation of hardware for a specific task will still beat our wondrous modern hardware!

2) A quote about supercomputing:
"Supercomputing is the art of turning a processor bound problem into an I/O bound problem"!

Mirno
Posted on 2002-01-03 21:05:07 by Mirno
http://www.llnl.gov/asci/news/white_news.html

...that is a super computer. :grin:

But distributed efforts have more potential - like SETI@home, or the RC5 cracking stuff. Software of the future will just be designed to operate in a more distributed fashion - that is part of what DotNET is about. There is greater power and continuity in networking rather than supercomputing. When you have a single chain, it's only as strong as the weakest link, but when you have a network it's hard to say where the weakest link is. :)
Posted on 2002-01-03 21:28:42 by bitRAKE
And an even bigger one is on the way!
Part of IBM's Blue Gene research project, the machine will operate at 200 teraflops, or 200 trillion floating-point operations per second, when it arrives in 2005.

Many existing data-intensive applications are slowed down by the time to access information from their memory chips. Blue Gene/L speeds up this process significantly - it's populated with 65,000 data-chip cells optimised for data access. Each chip includes two processors, one handling computing and one handling communication, as well as its own on-board memory.


full article text

I bet you could get some really good FPS on that thing ;)
Posted on 2002-01-03 21:38:14 by Mecurius
Thank you, everyone. Now I understand that the Blue Gene computer
will operate at 200 teraflops, and a flop is one floating-point operation per second. I have 2 PCs and it would be interesting to know something about them. After running Norton's Sysinfo and DrHard by Peter Gebhard Software, I found that my old computer, based on a Pentium 200 MMX, scores 200000 Hardstones / 75000 Softstones / 56 Benchmarks, and the new one works approximately 5 times faster. Good, but there is nothing about FLOPs! What program can measure my computer's performance in FLOPs, and how can I measure the same for my network (one program runs 4 or more tasks there on independent computers, but the output returns to the server every 2 seconds)?
Thanks, Mike
Posted on 2002-01-04 00:30:09 by Mike
bitRAKE, the distributed computing of more recent times could in some ways be categorised as supercomputing, but it is not at all parallel!
A large distributed collection of floating heterogeneous nodes would be a reasonable description. And for carrying out a whole lot of small tasks it is great! When you want one task run at incredible speeds, though, it will simply suck, because little Joey's PII will be simulating a huge nuclear strike while the rest of the nodes sit dormant! If there is no direct communication between nodes, it limits what can be achieved (in the same way that shared memory systems are more flexible because all nodes are connected).

The real future of home supercomputing lies in things like Beowulf, which provides coarse-grained parallelism at next to no cost. The granularity of parallelism is based on the cost of communicating between processors, and the speed of the processors involved (do they wait long for data from other processors?).
You can build your own 2048-processor supercomputer using Beowulf for the same cost as (or less than) a 32-processor Cray T3E. It is a bit more tricky to maintain and will take up a whole lot more space, but it is reconfigurable (re-organise the network cables), and in many cases more powerful because of the increase in processor numbers. The only problem is that network cables are absolute rubbish compared to the dedicated communications systems that a real supercomputer will use....
If you design the algo properly though (compute and send data long before it's needed, to allow for network latency), the effect will be astounding (for the cost).
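As a sketch of what "send data long before it's needed" looks like in practice (again using MPI non-blocking calls purely as my own illustration): start the transfer early, keep computing, and only wait for it once the data is actually required.

/* overlap.c - hide network latency behind computation (sketch)    */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    double out = 42.0, in = 0.0;
    MPI_Request sreq, rreq;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* each node sends to its right-hand neighbour and receives from
       the left (a ring), without blocking                           */
    MPI_Isend(&out, 1, MPI_DOUBLE, (rank + 1) % size, 0,
              MPI_COMM_WORLD, &sreq);
    MPI_Irecv(&in, 1, MPI_DOUBLE, (rank + size - 1) % size, 0,
              MPI_COMM_WORLD, &rreq);

    /* ... do plenty of computation here that doesn't need 'in', so
       the network transfer happens behind it ...                    */

    MPI_Wait(&rreq, MPI_STATUS_IGNORE);  /* only now is the data needed */
    MPI_Wait(&sreq, MPI_STATUS_IGNORE);
    printf("node %d received %f\n", rank, in);

    MPI_Finalize();
    return 0;
}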

It's like assembly: when you need the speed, you pay the price; the rest of the time, you can get away with something much cheaper!

Mirno
Posted on 2002-01-04 05:34:35 by Mirno
Any computer better than yours.
:alright:
P1
Posted on 2002-01-04 08:13:34 by Pone
Mirno, name one large task that cannot be broken down into smaller tasks. Like in assembly, the difficulty comes when the dependencies between these small tasks are very high. Unfortunately, not all algorithms can be redesigned to eliminate these dependencies - you are right.
Posted on 2002-01-04 10:00:17 by bitRAKE
While a C=64 might not be a supercomputer, it is a Super Computer.
Super fun, super entertainment, super retro, super eliteness.
Who needs to perform nuclear fallout calculations on a cray when
you can defeat the evil witch in Bubble Bobble?
Posted on 2002-01-04 10:08:49 by f0dder
Bubble Bobble lol :)

They call that Bust-A-Move now... in America at least. I love that game though :)
Posted on 2002-01-04 11:13:23 by Torch
Yeah, I enjoy playing all the versions on M.A.M.E. - great game.
Posted on 2002-01-04 11:34:16 by bitRAKE
I thought bust-a-move and bubble bobble are quite different?
Ok, so they both involve bubbles and the two merry dragons...
but the gameplay...
Posted on 2002-01-04 11:37:18 by f0dder
I guess it depends on the criterion you use to define a "super computer". A long time ago I worked for a company that had an IBM system 360 and it was a mean piece of hardware. Gangs of HDDs, reel to reel tape, !!!! 64k core storage !!!! and a printer that was so fast it pelted 6 foot wide paper almost as high as the ceiling.

The rough rule is that the more you do with dedicated hardware, the faster it gets; true arcade games have very smart hardware to do what they are designed to do, but they are hardly general purpose.

Some military chips are orders of magnitude faster than conventional silicon-based computer processors, and that is, among other things, because they are dedicated to the task they perform.

I am inclined to think that it will take new technology to get any quantum leaps in processing grunt. There is technology in the pipeline for memory that is orders of magnitude larger and faster; apparently the technology is atomic in structure.

Bottom line is a super computer is one that is faster than the one you are using to do the job at hand.

Regards,

hutch@movsd.com
Posted on 2002-01-04 16:47:47 by hutch--
----
While a C=64 might not be a supercomputer, it is a Super Computer.
----
Shit!
I wanted to post this when I read the topic :-)
I have 2, and I sometimes play Pirates or watch some old demos.....

Greetings to: Beyond Force, Hotline, Triad, Fairlight, GCS...
Posted on 2002-01-05 03:45:42 by Max