...
Posted on 2004-05-20 18:34:23 by Scali
Oh, and CAD is not 'line drawing programs'.
If you come up with a way to calc the minimum distance between two capped cones of arbitrary position, orientation, angle and size, let me know.
...
Posted on 2004-05-20 18:35:48 by Scali
Take 3.

HAY MAN,

I have just hooked up this blazingly fast 386SX running at 20 MILLION cycles a second, and it is so advanced it can actually handle hardware multitasking if an operating system is ever developed for it. It's got 8 MILLION bytes of memory and a whopping 40 MILLION bytes of disk space. Why would you ever need more computing power than this? The potential for programming in up to 256 colours with a screen update rate of under 1 second is truly mind blowing. This technology is so advanced it will take years to catch up to it.
Posted on 2004-05-20 19:17:18 by hutch--
Does everyone here have 4GB of memory? I guess not, so they're not at the limit of 32-bit.. and memory isn't exactly cheap. In other words, 32-bit isn't what's stopping most people having more memory is it?
Posted on 2004-05-20 19:48:07 by stormix
Let me make it clear that I stated that MOST people do not need 64 bit YET.
That means that there are SOME people that may need 64 bit NOW. And it could also be that EVERYONE may need (? use anyway) 64 bit EVENTUALLY.

The point is that x86-64 is not useful to MOST people AT THIS TIME.
I haven't heard any good arguments against that one yet.

Obviously some people need 64 bit, and some people have been using 64 bit long before AMD even thought of developing x86-64. Generally those people want the performance of 64 bit, and AMD is not what they want, because it is too little, too late.

And obviously if you put obscene amounts of memory into a computer, there are always ways of filling it up, as randall.hyde mentioned. That does not mean that these ways are actually useful to anyone though. Some people build houses from popsicle sticks. How many people live in such houses though?

... omg! I've found a post which doesn't require editing!...
Posted on 2004-05-20 20:12:37 by Scali
I agree. And shirley you could fit a half hour of video into 4GB (if you wanted)?
Posted on 2004-05-20 20:15:58 by stormix
You don't need to fit video into memory. A video is a stream, and it is read in a linear fashion. A harddisk is fast enough to stream it while it is being encoded or decoded in realtime. (There are actually special A/V harddisks specifically for these tasks. The main difference is that they don't need to recalibrate because of small temperature changes, like regular drives, which means they have a higher sustained throughput. They are also generally high-performance anyway, and expensive of course. Such devices are still much cheaper than actual DRAM though, and they will be for a long time to come. 64 bit may be nice, but harddisks are way cheaper than DRAM.)
And if you need 'random access', you just need to cache a few strategic points that you can seek to immediately. To make it even faster, cache a tiny part of the video from each of those points as well; that way you can play from memory while the harddrive seeks to that part of the stream, and once the cached part runs out, the harddisk takes over streaming.
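As a sketch, the caching scheme described above could look something like this (the chunk size, function names, and the evenly spaced placement of seek points are all illustrative assumptions, not anything from a real player):

```python
import os

CHUNK = 256 * 1024  # bytes of video cached per seek point (illustrative)

def build_seek_cache(path, num_points):
    """Cache a few evenly spaced offsets, plus a small chunk of the
    stream at each, so playback can start from RAM while the disk seeks.
    The cache is static after this initialization: no further copying."""
    size = os.path.getsize(path)
    cache = {}
    with open(path, "rb") as f:
        for i in range(num_points):
            offset = (size * i) // num_points
            f.seek(offset)
            cache[offset] = f.read(CHUNK)
    return cache

def nearest_seek_point(cache, target):
    """Pick the cached offset at or before the target position; playback
    starts from this chunk while the harddisk seeks to `target`."""
    candidates = [o for o in cache if o <= target]
    return max(candidates) if candidates else min(cache)
```

In a real player the seek points would sit on keyframe boundaries rather than at even byte offsets, but the principle — a tiny RAM cache covering the disk's seek latency — is the same.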

Besides, one could argue that MOST people do not do video editing, therefore the example is not very useful. Video editing still requires rather expensive hardware and software anyway, so it is not for everyone (yet?). It's just as useful as arguing that heavy corporate databases need big, fast 64 bit machines with lots of memory. It may be true, but it doesn't apply to most people. It won't make me decide to buy such a machine anyway.
Posted on 2004-05-20 20:20:42 by Scali
*skipping a lot of posts because they're plain junk*

I really wish scali could behave himself, because he has a lot to offer... more than a lot of other people. But christ, even I can come off better than you, scali - and that's scary :O. Too bad, considering how other talking heads come off >_<

I agree with like 90% of scali's points though.
Posted on 2004-05-20 20:42:26 by f0dder
What did I do now, and what's wrong with the 10%? :P
Posted on 2004-05-20 20:45:09 by Scali
Oh, if we all could make points it would be so easy to choose a winner and end this miserable existence. :)
Posted on 2004-05-20 21:27:00 by bitRAKE
Afternoon, f0dder.

If you look through past posts you'll notice editing in most of them - not just Scali's posts.

A note to all:
I'm beginning to find it very difficult to decide what to cut/stay in some posts. Please re-read the board-rules regarding having respect to other members. This means no insults (even subtle insults).

Cheers,
Scronty
Posted on 2004-05-20 22:02:08 by Scronty

Let me make it clear that I stated that MOST people do not need 64 bit YET.
That means that there are SOME people that may need 64 bit NOW. And it could also be that EVERYONE may need (? use anyway) 64 bit EVENTUALLY.

The point is that x86-64 is not useful to MOST people AT THIS TIME.
I haven't heard any good arguments against that one yet.

Fine. I agree 100% with this.


Obviously some people need 64 bit, and some people have been using 64 bit long before AMD even thought of developing x86-64. Generally those people want the performance of 64 bit, and AMD is not what they want, because it is too little, too late.

I agree 100% with this, too. But it's not because AMD did such a lousy job on their chip design (which they may very well have, I don't know as I haven't looked closely at it). The big problem is that there is no software today that can take advantage of the features. Yep, MS *does* have a 64-bit OS and so does the Linux crowd, but there are very few *practical* applications that can take advantage of those features today. For example, I can't run Adobe Premiere (or whatever PC-based application is best for video editing, I don't know as I use a Mac for this purpose) and be able to load entire projects into memory because the apps are still 32-bit.


And obviously if you put obscene amounts of memory into a computer, there are always ways of filling it up, as randall.hyde mentioned. That does not mean that these ways are actually useful to anyone though. Some people build houses from popsicle sticks. How many people live in such houses though?

The 386, with the ability to address up to 4GB of memory, first appeared in 1985. Win95, the first *practical* release of Windows that had application support for the 32-bit model did not appear until 10 years later. Yes, the hardware *always* leads the software. No one is going to use a 64-bit CPU today. But if AMD wasn't busy putting one out today, that would simply push the adoption date of 64-bit technology that much farther into the future. Maybe the x86-64 isn't the right way to go from an aesthetic or architectural point of view. But Intel has proven over and over again that people aren't willing to abandon their investment in software. They *will* upgrade to a faster machine that runs their current software faster and better and holds the promise of better software in the future. It's a much more difficult task to convince them to upgrade to a machine that runs their current software *slower* with the promise of better software tomorrow. Apple pulled this off with the PowerMac, but Apple also has a user base that is quite religious about their machines and the PC base doesn't have that (i.e., I don't see a lot of people switching to Itanium, which is the only other game in town for 64-bit PC-compatible computing these days).

Thank God for AMD! They forced Intel to create plans for a 64-bit x86 machine. They've accelerated the march to 64-bit machines, which had been languishing for desktop use. Given the 10-year gap between technology introduction and acceptance, I wish AMD had done this five years ago :-)

Yes, as software engineers, our life is getting miserable again. We've got to go through another upheaval like we did going from 16 bits to 32 bits. But just compare the software we had pre-32-bits to the software we have today. And that will give you an idea of the kind of software we can expect when 64-bits hits its stride.

Me, I'm still waiting for 256 bit processors!
Cheers,
Randy Hyde
Posted on 2004-05-20 22:34:45 by rhyde
Scronty, so it's allowed as long as it's so subtle that you can no longer discern it's an insult? Hiroshimator said another_old_member is rude and I find that insulting. :)
Posted on 2004-05-20 22:35:57 by bitRAKE
Hi Scronty, if you are going to edit my posts please just remove them completely. I did intentionally call him an idiot and I would rather have the post removed than edited.

PS I took the liberty of removing them myself.
Posted on 2004-05-20 22:40:22 by donkey

I agree. And shirley you could fit a half hour of video into 4GB (if you wanted)?


Figure about ten minutes for every GB of memory. Depends on the compression rate, of course. But for video editing, keep in mind that you typically use 3-4x as much storage as your final product. First, you have to have all the individual files that compose your final product. Their aggregate is usually much larger than the final video (i.e., you may shoot 1 minute of video and only use 10 seconds in your final project). Yes, you *could* preedit the files to make them smaller, but the whole point of having all that memory is so you don't have to waste time on such side projects. Also, you'll need enough memory to hold the entire rendered project (as the input files may not use the same codec as your output, so you can't simply use pointers here).
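As a rough sanity check on the "ten minutes for every GB" figure (the 12 Mbit/s bitrate below is an assumption, roughly DVD-quality MPEG-2, not a number from the post):

```python
def minutes_per_gb(bitrate_mbps):
    """Minutes of video that fit in 1 GB at the given bitrate (Mbit/s)."""
    bits_per_gb = 8 * 1024 ** 3
    seconds = bits_per_gb / (bitrate_mbps * 1_000_000)
    return seconds / 60

# At an assumed ~12 Mbit/s, roughly 12 minutes fit in a GB, in line with
# the estimate above; raw miniDV at ~25 Mbit/s would fit about half that.
```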

Playing back video, well depending on the compression rate you could get about 1/2 hour of video into memory. But that's not sufficient for editing purposes.

BTW, the thought occurs to me- another great reason for having tons of memory (video related) is for DVR purposes. Imagine recording three or four shows off cable simultaneously, processing them in some way, and storing the result. Could be lots of fun!
Cheers,
Randy Hyde
Posted on 2004-05-20 22:52:15 by rhyde
Originally posted by another_old_member
You don't need to fit video into memory. A video is a stream, and it is read in a linear fashion.

For playback or recording of a single stream, this is true. It is not true when you're editing a project and you have dozens or hundreds of different files you're working with. Also, when you are doing *non-linear* editing, you tend to jump around in the file a lot. When you do scrubbing and fast-forwarding, you can easily exceed the disk transfer rate, even with RAID.


A harddisk is fast enough to stream it while it is being encoded or decoded in realtime. (There are actually special A/V harddisks specifically for these tasks. The main difference is that they don't need to recalibrate because of small temperature changes, like regular drives, which means they have a higher sustained throughput. They are also generally high-performance anyway, and expensive of course. Such devices are still much cheaper than actual DRAM though, and they will be for a long time to come. 64 bit may be nice, but harddisks are way cheaper than DRAM.)

Yes, hard disks are way cheaper than RAM. They make *great* backup store for all those files you've got sitting around in memory that you can't lose once you need to start another project. But if Moore's Law continues to succeed for DRAM (which is the one category that has really maintained ML over the years), we *can* expect machines to commonly ship with 16 GB of RAM in about 2-6 years. And building a machine with 64GB won't be out of the question. 64GB is a reasonable amount of memory to edit a 1/2 hour to one hour video totally in RAM (and even have a little bit of memory left over to do things like Photoshop editing on large images you need to stick into the video without having to first quit the video editor).
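A quick back-of-the-envelope check on that projection (the 18-24 month doubling period is the usual Moore's Law assumption, not a figure from the post):

```python
import math

def years_to_reach(start_gb, target_gb, doubling_months):
    """Years for common memory capacity to grow from start to target,
    assuming it doubles every `doubling_months` months (Moore's Law)."""
    doublings = math.log2(target_gb / start_gb)
    return doublings * doubling_months / 12

# From a typical 2 GB machine, reaching 16 GB takes three doublings:
# 4.5 years at an 18-month period, 6 years at a 24-month period --
# consistent with the "2-6 years" range above.
```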


And if you need 'random access', you just need to cache a few strategic points that you can seek to immediately. To make it even faster, cache a tiny part of the video from each of those points as well; that way you can play from memory while the harddrive seeks to that part of the stream, and once the cached part runs out, the harddisk takes over streaming.

That requires a certain amount of prescience that the software probably won't have. And don't forget, caching can actually *slow* down processing in many cases (all that data copying...).


Besides, one could argue that MOST people do not do video editing, therefore the example is not very useful. Video editing still requires rather expensive hardware and software anyway, so it is not for everyone (yet?).

????
Expensive is in the eye of the beholder. Seriously, for "somewhat less than professional" results, an iMac and a miniDV camera is about all you need. That's not *that* expensive. But let's talk semi-pro results (like the stuff that I do). I believe that "yet" is the operative word. Let me retrace my history with computer-based video editing:

1. I bought an old video capture card and an Audiomedia board for my Mac IIFX back when QuickTime first arrived. I actually did some low-quality (15fps) commercials with that set up. My camera (about ,000) was, by far, the most expensive part of the system.

2. System speeds increased. I bought a Mac G3, Canon XL/1, Sony DSR-20, DAT (what a waste), and other goodies. For the K or so I spent on this system and other goodies, I had the equivalent of a K-0K system when I first started all of this (see [1] above).

3. Today, you can get reasonably cheap miniDV camcorders and that's all you need beyond a firewire port on your PC. Heck, decent video editing software will cost you more than the PC or the camera these days!

Still, I do agree with your basic assertion - it's not for everyone. But that has nothing to do with cost. It has to do with the amount of effort needed to do video editing. This isn't a task for those desiring instant gratification. To produce an amateur quality "home" video probably takes about 30 min to one hour of effort for every minute of finished result. To produce a semi-pro result can take quite a bit longer. Not being a professional, I can't personally suggest how long a professional job would take. I do know that with more memory and a faster processor, my own semi-pro efforts could get done in about half the time. So rather than taking 40-80 hours to edit a five-minute music video, I could probably do it in 20-40 hours. Yes, I would pay more money to save that amount of time on each project I do.

Now, consider a little idea I presented in an earlier post- real-time avatar editing of a video during playback. A person could get a (large) set of 3D images of themselves to store on their computer (and, ultimately, in RAM), and the computer could substitute their image during playback. Yep, that could eat up a lot of RAM.

Consider another application - gigasample music files for synthesizers. It's not uncommon for a single set of samples for a given instrument to consume 1 GB of memory on the disk. For recording purposes, who cares about disk transfer rates (because you don't particularly need real-time). But for live performances, if you want to simulate the better part of an orchestra, having all those samples in RAM would be a very handy thing to do. This is *not* streaming data we're talking about here - it's random access.

Finally, let me offer a more mundane task - normal application use. Even on my relatively fast machine there is a brief (to sometimes long) pause whenever I run a new app. Gee, if I had all the apps I normally use sitting around in memory, those delays would go away. Gee, if I had a ton of memory, VM thrashing (which I *still* experience on my 2GB machine, on occasion) would go away (then again, I also remember people making those predictions back when they were imagining machines with 256MB of memory :) ). Gee, I could even cache *all* the data files I normally use on my PC (which, btw, is not where I do video editing). The hard disk would become a backing store; defraggers would become a thing of the past with disk processing being more concerned about how fast a particular block could be written to the disk rather than worrying about how long it will take to read that block back from the disk later on (e.g., writing the data to blocks closest to the disk head's current position rather than worrying about sequential allocation of file blocks). This is exactly the kind of stuff that would work great for an individual and would not work at all for a large corporate database, btw.



It's just as useful as arguing that heavy corporate databases need big, fast 64 bit machines with lots of memory. It may be true, but it doesn't apply to most people. It won't make me decide to buy such a machine anyway.

It's the new applications that the 64-bit technology will produce that will convince people to buy new machines (you and anyone else). Today, people don't need these machines. Tomorrow, they will. And that means that software developers should be salivating over the 64-bitters today, so they can get a head start on the applications that will be needed tomorrow. Betting on the x86-64 or Intel's version is relatively safe, given past history. While the Itanium may give the HPs and the Sparcs a run for their money, it's not going to do much for the desktop user, I'm afraid. What's the other alternative? PowerPC? Something brand new with no software available for it? Just sticking with the 32-bit x86 that seems to have run out of gas?
Cheers,
Randy Hyde
Posted on 2004-05-20 23:28:32 by rhyde

Does everyone here have 4GB of memory? I guess not, so they're not at the limit of 32-bit.. and memory isn't exactly cheap. In other words, 32-bit isn't what's stopping most people having more memory is it?

Motherboards and OSes are the main problem here. In theory, for example, the latest Pentium ilk can address well beyond 4GB of physical memory (up to 64GB via the PAE paging extensions), but few OSes support the facilities needed to make use of this. For technical reasons (fan-out, skew, propagation delays, etc.), few chip sets support more than about 4 memory modules and, using technology of the past few years, more than 2GB. That is rapidly changing as we're about to go through another Moore's Law cycle that doubles the number of transistors on the memory chips. 2GB memory modules are just around the corner, IOW. The problem then will be that you won't be able to plug them into a typical motherboard.

And yes, the next generation of motherboards will probably feature 2GB of cheap memory, if history is any indication. That means that if you're a software developer who wants to stay ahead of the curve, you should invest a few bucks and start moving towards the next generation sooner rather than later. I realize that this is an assembly newsgroup and some people consider all this to be code "bloat". But the truth is, sometimes the way to *faster* software is consuming more memory for data structures that reduce processing time.

The point is, we're right on the cusp of switching over to a larger address space based on the cost-effectiveness of memory. Those of us who've been around a little while will remember the last time we had a major address bus shift - when we went from the 20-bit/24-bit address bus of the '86/'286 to the 32-bit address bus (with 32-bit displacements) of the 386. Yes, people had a tough time exceeding the 1MB address space of the 8088 (and, later, the 16MB address space of the 286). I don't expect this same ugliness as we switch to 64-bits, because back then people needed the address space in their current apps and that, for the most part, is not true today. But it won't remain true for too much longer.
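The jumps being described are easy to put in numbers (pure arithmetic on the bus widths named above):

```python
def address_space_bytes(bus_bits):
    """Bytes addressable with a flat address bus of the given width."""
    return 2 ** bus_bits

# The transitions discussed above:
assert address_space_bytes(20) == 1 * 1024 ** 2    # 8088:   1 MB
assert address_space_bytes(24) == 16 * 1024 ** 2   # 286:   16 MB
assert address_space_bytes(32) == 4 * 1024 ** 3    # 386:    4 GB
assert address_space_bytes(64) == 16 * 1024 ** 6   # 64-bit: 16 EB (architectural)
```

(Actual x86-64 chips expose fewer than 64 physical address lines, but the architectural limit shows the size of the step.)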

Gee, do I need 2GB on my PC (which is what I've got installed)? I do all my memory intensive apps on my Macintosh, not the PC. But even so, having that 2GB on my PC speeds things up. Files remain cached longer. Thrashing occurs very infrequently. Could I use more today? Not without an OS upgrade, that's for sure. But all of that is coming with the next generation of software.
Cheers,
Randy Hyde
Posted on 2004-05-20 23:43:33 by rhyde
Damn,

I knew I blew it when I only put 2 gig in my current box, but 4x1gig DDR400 was about 3 times the price, so I will just have to suffer along with less. :grin: When you have it, you find a use for it: 7zip ultra mode compression used over 1 gig of memory with the last archive I made, and I regularly benchmark algos on 1 gig VOB files, so YES, I would have put 4 gig in the box if the price had been right.

Take 4. (circa 1990)

MANNNNNNNNNN,

You should see how much grunt my new 486DX has, screaming along at 33 MILLION cycles a second. I had it built with 8 MILLION bytes of memory and a truly amazing 300 meg NEC ESDI hdd, eats SCSIs alive. Along with that it has a blazingly fast 2k/sec modem so I can rat junk off BBS's, and the really big deal surprise is it has a Philips CD reader that's a heap faster than reading a floppy. It handles nearly 50k/sec. Team that up with a Trident 8900 video card with a FULL MEGABYTE of video memory and a SUPER VGA monitor and you have the all time ultimate in computer power. Why would you ever need more grunt than this when it's powerful enough to be a file server? Windows 3.00 is so fast you almost need to slow it down and DOS runs in fantasy land.
Posted on 2004-05-21 01:29:18 by hutch--
Some things to say...
Firstly, randall.hyde is off about how video editing works. The last time I used video editing software, it basically batched my operations, and when I was done, it rendered all operations to a new file (yes, mostly linearly).

The caching also doesn't involve any 'data moving' in this case, other than the first time to initialize the cache. After that, it's static.

As for the 3d avatars, I already explained how they would be implemented, and how this would not take a lot of memory. I find it rude to just ignore that, and reiterate the same point, which is already proven wrong.

And synthesizer samples are very much streaming data. If you do random access on them, you would simply get white noise, or something. You start at a certain point in the sample, and then play linearly, much like with video. And this means that the same caching scheme would work.

And that means that software developers should be salivating over the 64-bitters today, so they can get a head start on the applications that will be needed tomorrow.


That's the point. I am not against 64-bit, but I am against x86-64, because it's just a hackjob which will effectively set us back 10 years again, with all the legacy support holding it down, while there are sleek, modern 64-bit CPUs on offer as well. Most of your monologue is about moving forward and taking advantage of new technology. Then you should actually agree with my standpoint: that we should USE new technology, and not x86 with a 64-bit extension patched on, since it is much slower and more expensive than a pure 64-bit design.

Something brand new with no software available for it? Just sticking with the 32-bit x86 that seems to have run out of gas?


.NET would fix the software problem. And I'd rather stick with 32-bit x86 for a bit longer (while we still can), if that means we will eventually move to a GOOD 64-bit system. Going x86-64 now will mean that we will remain locked-in with x86 for longer, and no more direct excuse to move away from it (after all, we already have 64-bit then). And that is why I do not appreciate AMD's move to go x86-64, especially since Intel apparently was NOT going to go down that road.

And I would therefore like to ask... WHY do you have a Macintosh? Because you wanted to choose the best technology for the job? I would like that freedom as well.
Posted on 2004-05-21 03:31:12 by Scali
hutch, did you just associate Trident 8900 with power? omg!
Posted on 2004-05-21 03:31:40 by Scali