What is your point...?
Posted on 2004-05-19 11:56:29 by Scali
I don't have a point - is my summary of your statements correct?

I have chosen to program x86 for the next few years, and have made purchases in that regard.
Posted on 2004-05-19 12:04:38 by bitRAKE
The first part of your statement is correct, I have to buy a platform that allows me to use the tools that I require.
The second part of your statement makes no sense to me.

"We cannot know what the most popular future system supported by the tools you use will be".

What is this supposed to mean?
Obviously we can draw the conclusion that systems that are not supported by the tools can never be popular, so those can be ruled out. This leaves only one platform.
So what does this say then? Something like "We know that the only platform that is supported by the tools you use will be the most popular one"?
It doesn't make sense to me.

And why have you chosen to program for x86?
Is it the same reason as mine? Which is something like "I know it's not the best platform for what I'm doing, but since the tools force me to use x86, I will have to use it anyway".
Basically I haven't chosen x86, but the choice was made for me.
Posted on 2004-05-19 12:15:10 by Scali

"We cannot know what the most popular future system supported by the tools you use will be".
It means that you are not in control of the development of the tools you have chosen to use.

Obviously we can draw the conclusion that systems that are not supported by the tools can never be popular, so those can be ruled out. This leaves only one platform.
Wrong, nothing ensures other tools/platforms will not supersede the tools you currently use. So, you are saying you'll follow whatever is popular? You don't strike me as that kind of personality, but I don't know you.

I have chosen x86 because it is what I do now - I might do something else in the future. I try other stuff, too. Might try programming for a phone, PDA, etc.
Posted on 2004-05-19 12:28:41 by bitRAKE
*sigh*
Posted on 2004-05-19 12:37:31 by f0dder
Originally posted by f0dder
But the disk system... what's the big benefit of SCSI these days? ATA133 clearly only reaches the 133mb/s in bursts, and only for very short periods of time - they can still do pretty well sustained, though.

SATA, under laboratory conditions, can achieve 50 MB/sec with well-written code (i.e., you're only going to realistically get half this under normal application conditions). This is pretty much the physical limit for a decent hard drive mechanism. PATA is not as fast as SATA, for a variety of reasons.

Can a single SCSI disk really reach "insanely high" transfer rates,

"Single" is the key. As has been pointed out, the mechanisms used in modern disk drives are *mostly* the same. There are *some* differences that affect performance, however. As the rotational speeds increase, drive manufacturers tend to use fewer tracks on the disk (and, alas, the ones that they stop using are the ones near the outer edge, where you get the higher linear speeds and the greatest data transfer rates; though overall, you do get better transfer rates than on a slower disk). Nevertheless, the "funny" drives that are optimized for performance normally appear in the high-end SCSI and SATA units, not in the "every penny counts" PATA drives. So although they use the same physical mechanisms, the higher-end drives are configured differently to obtain more performance, perhaps at the expense of capacity.

As to whether a single SCSI disk can achieve an "insanely high" transfer rate, well, there are many other factors besides the bit rate off the disk's surface to consider here. We'll see some of those shortly.


or are the performance benefits other things like much shorter seek time (better than 9ms for an average IDE disk), or perhaps less CPU usage (by a smarter interface to the CPU)?

High-performance disk drives tend to have faster average seek times mainly because they have fewer tracks! E.g., a 15,000 RPM drive generally doesn't use as much of the disk's surface as a 7,200 RPM drive (it has something to do with aerodynamics, flying heads, and junk like that). Of course, on *really* cheap drives, they use lower-quality solenoids and stuff like that, but I'm considering SCSI, SATA, and PATA drives built on the same basic mechanism here.


Unless you have some very smart integrated controller,

Guess what SCSI is :-)


you're never going to reach more than some ~130mb/s on a regular PC anyway - since the PCI bus limit is 133mb/s (for 32bit/33MHz PCI - I know there's other PCI standards, but that would usually mean a pretty expensive server board :))

Yes, higher performance motherboards do cost a bit more money. But that's really irrelevant anyway: ATA/133 doesn't come close to achieving 133 MB/s throughput; SCSI 320 can (in an appropriate configuration).

Note that most modern high-performance drives seem to max out these days at about 50 MB/sec. That means that you need to use a RAID configuration to get better performance. Though RAID software does exist for SATA and PATA, SCSI RAID is *much* better. Why? Several reasons - first, you can buy Adaptec boards, plug in two drives, and automatically get RAID0 operation with no special software. The intelligence on the controller handles everything (rather than having to waste CPU cycles doing the RAID processing).

Also, SCSI controllers are *smart*. They can handle multiple requests concurrently and automatically optimize disk head seeks and data transfer operations without having to waste precious CPU cycles on this. IOW, the OS simply sends a list of r/w operations to the controller and then goes about its business handling other processes, rather than wasting compute cycles figuring out the best way to access blocks on the disk. With PATA, the CPU has to handle all the smarts and the OS has to handle the scheduling of the I/O requests. I am not 100% sure, but I believe SATA has the same limitations - it uses the computer's CPU to reduce component costs on the controller, but you pay a heavy cost for this in a multitasking environment. Great design decision back in the DOS days, when there really was only one stream of I/O requests. But SCSI kicks butt in multitasking designs.


As for the whole 64bit deal, I think it's completely superfluous for normal desktops - ie, office, outlook, IE.

This takes me back! I remember people claiming that eight bits were good enough for word processors, etc., that we didn't need 16-bit processors, because word processors only worked with data one character at a time. :-)


A 64bit instruction set isn't that useful for 'normal' applications either, just how often do you need it (...for stuff where performance matters)? And even if you can use SIMD stuff, not everything can (or will) be optimized for that.

A 64-bit instruction set might not be that useful for *today's* applications. However, history has shown us over and over again that as the CPU architectures push the envelope, they enable programmers to write software that simply wasn't possible before. The ability to address more than 4GB of RAM is one of those "enabling" technologies that will open up a wide range of new applications that we wouldn't dream of today (video processing comes to mind here, as 4GB of RAM is only about 20 minutes of video, so being able to shove 64GB RAM into a machine would open up some interesting TiVo-DVR type applications).
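
A quick back-of-the-envelope check of that 20-minute figure (my own numbers - assuming DV-rate video at roughly 25 Mbit/s, i.e. ~3.1 MB/s):

    #include <stdio.h>

    int main(void)
    {
        double bytes   = 4.0 * 1024 * 1024 * 1024;  /* a 4 GB address space */
        double rate    = 25e6 / 8.0;                /* DV video: ~25 Mbit/s */
        double minutes = bytes / rate / 60.0;
        printf("4 GB holds about %.0f minutes of DV video\n", minutes); /* ~23 */
        return 0;
    }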


Increased memory space is sorta nice, but not that important for the majority of people either.

Until they discover what this means with respect to TV, cable, VCRs/DVRs, and so on.


Ok, one application can open multiple huuuge files with memory-mapping, that's a bonus for lazy programmers. And for databases and video editing, it can be an improvement too. But for a lot of other stuff, a 4GB address space is just fine... consider a terminal server. You can stuff in 36gig of ram with PAE mode, and if the x86 is built with NUMA architecture this can even happen without big memory bus problems - and each terminal client won't need even a gig of ram.

Again, you're trying to fit yesterday's applications onto tomorrow's machines. Once we have the address space available, you'll be amazed at the kinds of applications that will pop up. In particular, I'm looking forward to "customizable video" where you can supply your own avatars (e.g., 3D images of yourself) and the video will automatically substitute *you* as the star of the movie. I'm looking forward to being able to put my entire CD collection into RAM so I can run database searches on songs to find common riffs, metres, and so on across my collection. And I'm looking forward to the day when most of my data sits in RAM, rather than on disk (using the disk only as a backing store) so that I get blazing fast access to all my data. Today's processors don't allow this, 64-bitters will.


I'm a bit annoyed that AMD released their hybrid 32/64 bit stuff. It will force us to stick with the aging x86 technology even longer... consider if we had moved to a pure architecture (if we cared).

I'm pleased they've done this (and have forced Intel to do it). I've got *waaaaay* too much money invested in x86 software to start all over again. It's much cheaper to buy a new box than to repurchase all my software.


The core stuff (os + support programs) would be ported, even if IE, outlook and office would run quite fine in emulation mode.

Porting is not the problem. Paying for those ports is the problem. You think Adobe is going to give me a free update to Framemaker on a different CPU? You think MS will do this with Office? Think again.


The CPUs should be plenty fast enough to handle legacy apps... and anything that matters, which should be the only reason to move to 64bit, would of course be recompiled and take full advantage of the system. But now we're stuck with 32bit x86 code for even longer, and the 64bit part of the AMD-64 *probably* (haven't seen benchmarks) isn't as good as if it had been a pure 64bit breed.

There is no question that the 64-bit stuff in the AMD CPUs isn't as good as a new design. The same could be said about the 386's 32-bit extensions over the 16-bit 286. It's all about preserving investment in software. And as the Itanium has shown, people are *not* interested in spending big bucks for hardware that runs all their existing software *slower* than cheap hardware.


So, 64bit for the masses? Why bother.

They said the same thing about 16-bits, they said the same thing about 32-bits. It's not surprising people are saying the same thing about 64-bits. Heck, I say we leapfrog the whole mess and go straight to 256 bits :-).

Cheers,
Randy Hyde
Posted on 2004-05-19 12:51:26 by rhyde
It means that you are not in control of the development of the tools you have chosen to use.


Yes, and this is exactly the reason why I despise x86-64. The masses are in control, and if the masses go x86-64, this means that I and my tools have to follow suit.
.NET would give me this control.

Wrong, nothing ensures other tools/platforms will not superseed the tools you currently use. So, you are saying you'll follow whatever is popular? You don't strike me as that kind of personality, but I don't know you.


Other platforms have already superseded the platform that I use, that is exactly the point. But I cannot use those superior platforms because the tools aren't there.
Also, tools such as 3dsmax are so complex that it's not a market where some unknown player will just come out of nowhere and bring a product that wipes 3dsmax off the map. So if it ever happens, we will see it coming long in advance.
Also, I do not use 3dsmax because it is the most popular (I don't even know if it is). I use it because my artists prefer it over others, and my own tools are now partly dependent on 3dsmax. I would have to rewrite those for another program, which would take time and money. Basically we're "locked-in" with both software and hardware. I don't really mind the software, because the tools and OS that I use are the best available for the job. But the hardware isn't, and that's rather sad.
So I follow what is most popular in terms of hardware, not because I want to, but because the people that make my tools follow the most popular hardware as well. And I doubt that they chose that freely either. They probably just chose x86 and Windows because that is by far the largest marketshare.

I have chosen x86 because it is what I do now


... If you chose Itanium, you would have chosen that because it's what you do now?
At any rate it will not be a technical choice, I suppose. Mine isn't technical either, it's a strategic one I guess.
Posted on 2004-05-19 13:01:58 by Scali
another_old_member, I don't believe in positioning myself within the world for I am already here. I do what brings me enjoyment (and don't take that the wrong way - I enjoy the struggle).
Posted on 2004-05-19 13:13:09 by bitRAKE
Until they discover what this means with respect to TV, cable, VCRs/DVRs, and so on.


What about those 'insanely fast' SCSI drives that you spoke of? Those are often used today. Harddisk recording works fine. Why would you want to use expensive memory for this? Harddisks are fast enough to record or play back video in realtime, and you only need a small memory 'window' to edit/process the data. ...

In particular, I'm looking forward to "customizable video" where you can supply your own avatars (e.g., 3D images of yourself) and the video will automatically substitute *you* as the star of the movie.


Why would you need 64 bit for that? You'd just need a few mb to store the mesh and textures of your avatar, and a 3d accelerator can render it in realtime and put it into the movie, one frame at a time. I don't see anything that requires a lot of memory.

I'm looking forward to being able to put my entire CD collection into RAM so I can run database searches on songs to find common riffs, metres, and so on across my collection


Again, a complete waste of memory. Secondly, these suggestions would also work if you had a ramdrive. You don't actually need to have it in main memory. It wouldn't work very well anyway. What will you do, load your entire CD collection from the harddrive into main memory every time you boot?

And as the Itanium has shown, people are *not* interested in spending big bucks for hardware that runs all their existing software *slower* than cheap hardware.


...Itanium wasn't a success on the desktop because it was never aimed at the desktop. It was aimed at a segment where no x86 was ever used anyway. Large corporate servers. It also had a pricetag to match this segment. There was no issue of running x86 software anyway, because they weren't using x86 software before Itanium either. They were using SPARCs, PA-RISCs, POWERs etc.
There has not been an Itanium yet, which is aimed at the average desktop user. ...

They said the same thing about 16-bits, they said the same thing about 32-bits. It's not surprising people are saying the same thing about 64-bits.


No they didn't. 8 and 16 bits were very limited for both arithmetic and memory addressing (hence segmented memory models).
32 bit doesn't have these limitations for most applications, and therefore the need to move to 64 bit is much smaller than the need to move to 16 or 32 bits.
To take your example... even a wordprocessor would be limited by 16 bits, because you would need more address space just to open an average letter of over 64k. And if you want to implement a word counter or such, 16 bits would not be sufficient either. 32 bits, on the other hand... I doubt that there are many people who will ever write a letter of more than 4 gb in their lifetime, let alone a letter with more than 2^32 words in it.
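
(The same point in code - a throwaway illustration, nothing more: a 16-bit offset can't even index an average multi-page letter, while a 32-bit offset covers 4 gb.)

    #include <stdio.h>

    int main(void)
    {
        unsigned short off16 = 0xFFFF;      /* largest 16-bit offset: 65,535 */
        unsigned int   off32 = 0xFFFFFFFF;  /* largest 32-bit offset: ~4.2 billion */
        printf("16-bit max: %u bytes\n", off16);
        printf("32-bit max: %u bytes\n", off32);
        return 0;
    }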
...
Posted on 2004-05-19 13:18:10 by Scali


Unless you have some very smart integrated controller,

Guess what SCSI is :-)

Well, I was thinking along the lines of an integrated controller on a separate bus or something, rather than a plugin PCI card.


ATA/133 doesn't come close to achieving 133 MB/s throughput

Except if you stripe enough of them :p. I believe some of the high-end adaptec boards for SATA (and probably PATA as well?) have the RAID logic onboard too, rather than the semi-softraid most cheap units use. I know a guy who built a 500gig RAID-5 array with SATA disks and an adaptec card with... was it 64 or 128 meg cache? Sounded pretty sweet - until it shat itself. *not* fun having to destripe a 500gig array to save data.


With PATA, the CPU has to handle all the smarts and the OS has to handle the scheduling of the I/O requests.

Ah yes, the 'elevator sort' thingy. So SCSI re-orders requests, and the OS isn't guaranteed to get data back in the order it requested it? Sounds like an improvement over S/PATA, although I don't know how many CPU cycles are spent sorting read/write operations.
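
For the curious, the core of the elevator idea is tiny. A toy sketch of my own - real controller firmware is far more involved, this is just the concept: sort the queued block addresses and service them in one sweep past the head position instead of in arrival order.

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b)
    {
        unsigned x = *(const unsigned *)a, y = *(const unsigned *)b;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        unsigned queue[] = { 9800, 120, 5400, 121, 7000 }; /* pending LBAs */
        size_t i, n = sizeof queue / sizeof queue[0];
        unsigned head = 4000;  /* current head position, sweeping upward */

        qsort(queue, n, sizeof queue[0], cmp);

        /* Service everything above the head on the way up... */
        for (i = 0; i < n; i++)
            if (queue[i] >= head) printf("read block %u\n", queue[i]);
        /* ...then pick up the stragglers below it. */
        for (i = 0; i < n; i++)
            if (queue[i] < head) printf("read block %u\n", queue[i]);
        return 0;
    }

The win is fewer long seeks; the cost is exactly that reordering - completions come back out of request order.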


This takes me back! I remember people claiming that eight bits were good enough for word processors, etc., that we didn't need 16-bit processors, because word processors only worked with data one character at a time. :-)

The difference is that even 16-bit felt limiting rather fast. For moderately large projects, you had to construct various memory management schemes. Then came 16bit protected mode, and it didn't feel that much different - you still had to worry about segment spanning, locking/unlocking memory, etc.
With 32bit pmode, these limits disappeared - except for some *very huge* data processing, but that's mostly done streaming anyway, so not too bad. What I mean is that, sure, 64bit address space is nice, but it doesn't fix any limit as serious as 16 vs. 32.


so being able to shove 64GB RAM into a machine would open up some interesting TiVo-DVR type applications).

Well, we can already shove a lot into x86 with PAE - how much do you need to address *at once* is the question :). But yes, video editing and large database systems will probably benefit from 64bit address space.
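
For reference, this is roughly what "more memory than you can map at once" looks like on 32-bit Windows - a minimal AWE sketch (untested here, error handling omitted; it needs the "Lock pages in memory" privilege):

    #include <windows.h>

    int main(void)
    {
        SYSTEM_INFO si;
        ULONG_PTR nPages, *pfns;
        void *window;

        GetSystemInfo(&si);
        nPages = (64 * 1024 * 1024) / si.dwPageSize;  /* 64 MB of physical pages */
        pfns = (ULONG_PTR *)HeapAlloc(GetProcessHeap(), 0,
                                      nPages * sizeof(ULONG_PTR));

        /* Grab physical pages - under PAE these can live above 4 GB. */
        AllocateUserPhysicalPages(GetCurrentProcess(), &nPages, pfns);

        /* Reserve a virtual window and map the pages into it on demand. */
        window = VirtualAlloc(NULL, nPages * si.dwPageSize,
                              MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);
        MapUserPhysicalPages(window, nPages, pfns);

        /* ... read/write through 'window', remap other pages as needed ... */

        MapUserPhysicalPages(window, nPages, NULL);  /* unmap */
        FreeUserPhysicalPages(GetCurrentProcess(), &nPages, pfns);
        HeapFree(GetProcessHeap(), 0, pfns);
        return 0;
    }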

My concern was mostly regular users, though. Could easily do with a 1ghz PIII... even XP doesn't require that much, as long as you have enough RAM and a decent 2D hardware accelerated GPU (even a RIVA TNT would do here).


I'm looking forward to

Heh, okay, those kinds of things. Guess we'll also see games using 32768x32768 textures, etc. Still seems overkill for everyday use, but perhaps I'm just conservative - and I can't see why many of these things can't be done with today's technology... insane amounts of ram will just allow developers to be more lazy ;)


Porting is not the problem. Paying for those ports is the problem. You think Adobe is going to give me a free update to Framemaker on a different CPU? You think MS will do this with Office? Think again.

Then run in JIT/emu mode on native 64bit hardware until you have the cash to upgrade? If you don't get native versions, you're not benefiting from a 64bit CPU anyway.
Posted on 2004-05-19 13:23:36 by f0dder
Guess we'll also see games using 32768x32768 textures, etc.


Such detailed textures will not be useful until we have screen resolutions that can match them. Else you'd only use the lower mipmaps of the textures anyway, and you can drop the high levels from memory altogether. I don't see us getting such high resolutions anytime soon. Resolutions have scaled extremely slowly over the years.
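
For scale (my arithmetic, not a figure from anywhere): a single uncompressed 32768x32768 RGBA texture on its own already eats the entire 32-bit address space, before you even add the ~1/3 extra for its mipmap chain.

    #include <stdio.h>

    int main(void)
    {
        /* One uncompressed 32768 x 32768 texture at 4 bytes per texel: */
        unsigned long long bytes = 32768ULL * 32768ULL * 4ULL;
        printf("%llu bytes (%llu GB) per texture\n", bytes, bytes >> 30); /* 4 GB */
        return 0;
    }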

Then run in JIT/emu mode on native 64bit hardware until you have the cash to upgrade? If you don't get native versions, you're not benefiting from a 64bit CPU anyway.


Excellent point.
Posted on 2004-05-19 13:28:48 by Scali
The highest resolution monitor I've seen so far is a 3200x2400, that was sweet! The pixels were crystal clear, and you could read the text (even though it was still the standard windows setting, 10 point or whatever it is). Unfortunately it needed dual-head to run, and even then it would only hit a 44Hz refresh.

Back to the original point (dual processor systems in the home), I think it's a good way to go. Consider that Joe Average is now running an MP3 player, AV software, a firewall, AND their app (as well as maybe messenger type stuff, and all the spyware they don't know about) - you've got one processor for the background tasks and one for the game you want to play. And of course once they become the norm, software will be written for dual processors.

AMD64 currently doesn't offer full 64 bit addressing BTW, they've only got a 48-bit internal address bus. But that's just me being pedantic.

Mirno
Posted on 2004-05-19 14:07:26 by Mirno
you've got one processor for the background tasks and one for the game you want to play.


All those background tasks take about 0% CPU on my system anyway... I don't think anyone needs a second CPU to play an mp3, certainly not a complete x86 CPU. Don't soundcards accelerate mp3 replay these days anyway?

And of course once they become the norm, software will be written for dual processors.


I think you missed the part of the discussion where it was mentioned that most software cannot be written for more than one processor anyway, since the algorithms cannot be parallelized.
So, as said before, faster CPUs are more interesting, because they get more gain. Going dual-core is just the second-best thing, if CPUs cannot be made to go faster anymore.
Posted on 2004-05-19 14:23:25 by Scali
Afternoon, All.

Is it currently possible to code a multithreaded application so that the second cpu on a dual-processor system runs some of the other threads?

i.e. The main process thread running on one processor and the user-input and sound threads running on the other processor?
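
For what it's worth, Win32 does allow exactly this via SetThreadAffinityMask - a minimal sketch with placeholder worker bodies (normally you'd just let the scheduler spread runnable threads across the CPUs by itself):

    #include <windows.h>

    static DWORD WINAPI MainWork(LPVOID p)  { /* main processing loop */   return 0; }
    static DWORD WINAPI SoundWork(LPVOID p) { /* sound + input handling */ return 0; }

    int main(void)
    {
        HANDLE hMain  = CreateThread(NULL, 0, MainWork,  NULL, CREATE_SUSPENDED, NULL);
        HANDLE hSound = CreateThread(NULL, 0, SoundWork, NULL, CREATE_SUSPENDED, NULL);

        SetThreadAffinityMask(hMain,  1);   /* bit 0: run on CPU 0 only */
        SetThreadAffinityMask(hSound, 2);   /* bit 1: run on CPU 1 only */

        ResumeThread(hMain);
        ResumeThread(hSound);

        WaitForSingleObject(hMain,  INFINITE);
        WaitForSingleObject(hSound, INFINITE);
        return 0;
    }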

~~~~~~~~~~~~~~~~~
So far for 64bit we've got:
1) lots more memory can be addressed.
2) better for video editing (because...???).

Ummmm....
Soze. I still don't see it.
I have no application which would ever require access to such large amounts of memory.
For video editing you never edit the entire friggin' video at the same time.
What possible new applications would be built for use on these amazing 64bit systems?

Cheers,
Scronty
Posted on 2004-05-19 18:38:45 by Scronty
My decision is based on a development standpoint. I don't plan to develop user applications for such powerful systems.
Posted on 2004-05-19 20:46:29 by bitRAKE
Commenting on current SATA drives from Seagate using an Intel board designed for this technology: I would not lose any sleep over NOT running them as raid0, as the performance was not that good. Neither was the win2k / sp4 striping, so I set them up as separate drives like normal.

My last box used an Adaptec 7890 controller with two 18 gig Quantum scsi drives, and while the data transfer rate was higher with a few of the later EIDE drives I had, the scsi disks were far better in terms of running compilers and linkers, as the controller card handled the logic instead of passing it to the main CPU.

If I have Scali's view correct here, he is saying that under current demands, high end 32 bit hardware is adequate for the job if you have the appropriate peripherals like video cards and disk IO. This is of course correct, but it has no eye for the future, where this capacity will be surpassed in a reasonably short time.

The factor shaping the current market is consumer demand, and it is here that consumers exercise the only leverage they have: if it does not suit what they want, they just won't buy it. So no matter how many corporations want to manage a monopoly of their own, if no-one buys their stuff, it will not happen.

We all know that x86 architecture is ancient, but it has been tweaked over many years into de facto RISC while maintaining compatibility with the old stuff, and it is far faster and more powerful than the original hardware. I have yet to be convinced that the AMD Opteron will become a market leader, as there is little chance that Intel will follow and build a compatible 64 bit processor. But the basic architecture is there, and it is simply a matter of applying later technology to get that type of capacity a lot faster again.

New hardware means new horizons in software development, and while the puke generators will waste much of this capacity with sloppy code, new applications will develop that actually do use the extra technology.
Posted on 2004-05-19 21:26:23 by hutch--
There is an interesting thread over in C.L.A.X. entitled "Hyperthreading Goal". I strongly recommend that anyone who has some questions about the benefits of HTT take a peek at this thread. Lots of interesting comments (along the lines of "yeah, like a cache, HTT may speed up your code, but code that wasn't designed for HTT may actually wind up running *slower*").

An interesting quote:

A typical processor spends 30-70% of its cycles stalled waiting for memory
accesses. The idea behind SMT (marketed by Intel as HT) is that with two
threads available, odds are at least one of the threads will be runnable at
any given time, increasing the overall execution speed. Unfortunately,
Intel botched the implementation so seriously that it rarely results in more
than a 10% performance gain, and often actually worsens performance because
it increases cache thrash.

Wander over to comp.arch (and read the archives) if you want to know more,
as it's a constant point of discussion.
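
If you want to see the cache-thrash half of that claim for yourself, here's a toy test (my own, not from that thread): two threads hammering counters that almost certainly share a cache line.

    #include <windows.h>
    #include <stdio.h>

    #define ITERS 100000000

    /* Adjacent counters: almost certainly in the same cache line. */
    static volatile LONG counters[2];

    static DWORD WINAPI bump(LPVOID p)
    {
        volatile LONG *c = (volatile LONG *)p;
        int i;
        for (i = 0; i < ITERS; i++) (*c)++;
        return 0;
    }

    int main(void)
    {
        HANDLE h[2];
        DWORD t0 = GetTickCount();

        h[0] = CreateThread(NULL, 0, bump, (LPVOID)&counters[0], 0, NULL);
        h[1] = CreateThread(NULL, 0, bump, (LPVOID)&counters[1], 0, NULL);
        WaitForMultipleObjects(2, h, TRUE, INFINITE);

        printf("%lu ms\n", (unsigned long)(GetTickCount() - t0));
        /* Now pad the counters 64+ bytes apart and run it again - the
           difference is the cache line bouncing between the two CPUs
           (or the two halves of a hyperthreaded one). */
        return 0;
    }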

Cheers,
Randy Hyde
Posted on 2004-05-19 22:26:31 by rhyde
Originally posted by Scronty
Ummmm....
Soze. I still don't see it.
I have no application which would ever require access to such large amounts of memory.

Applications generally don't get written until the technology is available to support them. It is no surprise you don't have any applications that use this much memory. The day such memory is commonplace, however, such apps will arrive. Remember the good old days of DOS when apps couldn't use the 16 MBs you could put on your 286 box?


For video editing you never edit the entire friggin' video at the same time.

Oh yes I would. Much faster response, particularly when jumping around through thumbnails, when the entire video is in memory. If I do two-minute videos and run everything out of RAM, editing is a pleasure. If I do a half-hour video and I'm constantly going to disk, uggh!


What possible new applications would be built for use on these amazing 64bit systems?

That's the big question. We'll never really know until they build them. 'Cause no one's going to bother writing applications that won't work on current hardware (well, with the possible exception of Windows :-)).
Cheers,
Randy Hyde
Posted on 2004-05-19 22:32:29 by rhyde
You already have applications that need 64 bit or larger: massive databases are one, video editing is another, and there is the whole box and dice of multimedia that still performs poorly on the latest x86 32 bit hardware. Produce the hardware capacity and someone will find a use for it, just as happened when win95 was introduced. The tools will be lousy for some years and the real performance will not be there for some time, but to cater for the next generation of far more powerful applications, the only thing that will make it possible is hardware with a lot more grunt.
Posted on 2004-05-20 02:39:51 by hutch--
To put it another way: Most people already don't know what to do with the current grunt and memory.
I think it's safe to say that by far the largest group of users mainly uses their PC for websurfing, email, and Office applications, and perhaps playing mp3s or videos.
All this works fine on a very simple PC, say a PII 333 with 128 mb.
Yet most people already have a 2+ GHz machine with 512 mb. They are just wasting most of their resources already.
64 bit would just be more waste for them.
Posted on 2004-05-20 05:16:59 by Scali