It's strange, because serious research projects usually run primarily under Linux. Yann LeCun's Lush and LeNet; FFTW; Singular; etc. There must be a reason for this (which I don't know), and I don't think money is it.

I can think of a couple of reasons... it's not uncommon for universities to more or less force their comp.sci. students to run some *u*x variant, because of the machines in the computer labs, as well as the courses they need to follow.

Then there's the elitism aspect; many comp.sci. students are a bit... eccentric, and using linux and cryptic commandlines with a zillion pipes, as well as hand-crafting makefiles, makes them feel special :)

And then there are more reasonable reasons, like having source code to tinker with (useful for people taking courses on compiler design or OS construction).


Neither of you mentioned anything about my post, in which I explain why I moved to Linux. There is a strong point there, where I try to explain the use of production software NOT made by Microsoft: AutoCAD, ModelSim, Xilinx, etc. (I'm familiar with these). Or even MS Office.

I don't have experience with that software, so I cannot comment on it.


If you start working with such products, you will find out what Windows is capable of. Crash after crash, even on expensive, good, reliable hardware.

Does windows crash, or the individual applications? There's quite a difference. I've never seen a user-mode application take down an NT based system.


The developers of Linux should try to simplify those complicated command-line switches. The syntax of the VC compiler switches is not as horrible as that of gcc.

I wouldn't say those command lines are all that horrible; there's a lot to choose from, though, and it takes a while of tweaking and benchmarking to get the best results.

Posted on 2005-05-31 16:42:07 by f0dder

Then there's the elitism aspect; many comp.sci. students are a bit... eccentric, and using linux and cryptic commandlines with a zillion pipes, as well as hand-crafting makefiles, makes them feel special :)

:lol: :lol: There is an element of truth in it :)

Does windows crash, or the individual applications? There's quite a difference.

Yes, you're right, I did not specify this. The applications crash, but they recover only after a Windows restart. And this is not uncommon, and not only on my computer. Imagine, for example, a building loaded with I don't know how many hundreds of thousands of graphical objects: when you do some operations on them, Windows starts trudging. Windows is very good and easy for everyday usage, but definitely not rugged and heavy-duty. That's my problem.
But of course it depends on personal needs and on what tasks you want to do with your computer. That's why different kinds of operating systems exist.

I've never seen a user-mode application take down an NT based system.

My high school diploma work ;) I wrote it in 1999. It's an MS-DOS app written fully in ASM, and it uses VESA video modes. I wrote the VESA driver, the mouse driver, and graphical functions like OutText, Line, and others. I made a GUI for DOS. I was enthusiastic at the time :) The program actually did surface-wave simulation (animation).
The only problem was that it did not work correctly with ATI cards. I don't know why. Probably I made a bug somewhere in the page banking.
Once I ran the program under XP, and I did not know that the computer had an ATI card, and XP hung. I couldn't do anything but reset.  :lol: :lol:
But this was the only case; otherwise the program worked on every computer (>i386) and under every Windows. It never crashed.

I attached a screenshot of the program.
Posted on 2005-06-01 16:23:04 by bszente

Yes, you're right, I did not specify this. The applications crash, but they recover only after a Windows restart.

This sounds more like a driver issue to me... like serious resource leaks. Fortunately, "normal" resource leaks are recovered; otherwise a lot of programmers would be in serious trouble.


Windows is very good and easy for everyday usage, but definitely not rugged and heavy-duty

Humm, win2k web servers with more than a year of uptime aren't rugged/heavy-duty? I've just had 12 days of uptime with constant network activity, lots of messing with Visual Studio, a good deal of gaming, transcoding DVD-Rs, et cetera... the reason I rebooted? I installed some software that required a driver. That's rugged & heavy-duty enough for me :)
Posted on 2005-06-01 16:31:42 by f0dder

This sounds more like a driver issue to me...

I can assure you that it isn't. As I said in a previous post, I take extraordinary care with drivers, and the hardware is also of good quality and I really take care of it. (Boxed Intel D845GBV mainboard with a boxed P4.)

Humm, win2k web servers with more than a year of uptime aren't rugged/heavy-duty? ... That's rugged & heavy-duty enough for me :)

I believe this. That's why I said it depends on the target. However, I really have the feeling that apps from this (non-software) engineering world are a different question. I don't know why. Different companies (architectural ones; I know these through my father) also have major problems; I have had to do technical support several times. Not to mention when you work with big plotters (A0 size, or 2.5 meters of drawing). I can assure you that even the best (x86) computer on the market is still not enough. A print takes 15-20 minutes, and just processing the plot file takes 5-10 minutes. And when you have to print 10 or more such big plans... And if the plot file gets corrupted (something goes wrong somewhere in the generating process) in the middle of the plot, you lose time + toner + money. You start again.
But this is a different world. I'm really waiting for my father to quit this, because I'm tired of supporting it. There are so many random problems, and I'm fed up with them.
Posted on 2005-06-01 16:52:29 by bszente


My high school diploma work ;) I wrote it in 1999. It's an MS-DOS app written fully in ASM, and it uses VESA video modes. I wrote the VESA driver, the mouse driver, and graphical functions like OutText, Line, and others. I made a GUI for DOS. I was enthusiastic at the time :) The program actually did surface-wave simulation (animation).
The only problem was that it did not work correctly with ATI cards. I don't know why. Probably I made a bug somewhere in the page banking.


Or maybe there is a bug in the ATI driver that brought the NT machine down. Of course, I don't think you can blame Microsoft for that, as the driver code is most probably ATI's.

I agree with f0dder that the phenomenon you have described sounds like a resource leak: another bad application programmer is at fault.
Posted on 2005-06-01 19:11:53 by roticv
Both nvidia and ATi have had pretty serious driver issues, btw. NVidia likes to do bus hogging, to squeeze out a few more fps... unfortunately this causes skipping sound with some sound cards / drivers (and perhaps more serious problems as well).

It seems like ATi haven't run their drivers through the Driver Verifier; http://www.asmcommunity.net/board/index.php?topic=17926.0 - software crashes that have the same symptoms as hardware failure.


The only problem was that it did not work correctly with ATI cards. I don't know why. Probably I made a bug somewhere in the page banking. Once I ran the program under XP, and I did not know that the computer had an ATI card, and XP hung. I couldn't do anything but reset.

Again, sounds like an ATi video driver error. There might also be a few flaws in NTVDM, I can't rule that out (but at least it's a lot more stable than dos boxes on 9x).

I wonder why you're having so much trouble with engineering software... perhaps the coders are plain lousy? :)
Posted on 2005-06-01 19:29:19 by f0dder
Hey roticv, for my WaveSim program I don't blame Microsoft. I said that it must be my bug.

Again, sounds like an ATi video driver error. There might also be a few flaws in NTVDM

My bug is also present when I run the program from plain MS-DOS 6.22, so it's not related to any kind of Windows. It's related to the ATI VESA BIOS. Those ATI cards have slightly strange behaviour, and I did not handle something in the right manner. On any other video card my program runs well (nVidia, Intel 8xx, the old Trident, Cirrus Logic, etc.). I tested on a lot of hardware at the time; ATI's VESA is the exception.
I never had any problems with my nVidia card.

I wonder why you're having so much trouble with engineering software... perhaps the coders are plain lousy? :)

I wonder too, but I'm not the only one complaining. Another point is that this software is very complex (both algorithmically and mathematically) and has to perform very diversified tasks. Yeah... everybody is lousy except Microsoft  :lol: I don't think Microsoft has only good programmers. Don't forget that the software industry is a small part of the whole IT business. Computer and software users who only use software and do not develop it are significantly greater in number.
BUT I keep saying that (at least in my case), under Linux with the SAME software and hardware, the problems do not manifest.

In better-off places like Germany and so on, they avoid this problem by splitting the task (plan) into VERY small parts, to make sure the computers can deal with them; after the parts are ready, the whole design/building/plan is put together on a bigger graphical workstation that has the necessary software and hardware capacity. Don't ask me what kind of OS and hardware they use; I don't know.

The best AutoCAD version of all time was Release 12, which ran under MS-DOS. It was fast and reliable; back then, rendering a photo might take a whole week(!!), but there were no problems. After they switched to Windows, the problems started to appear, and they have been increasing over time. And yes, they have 20 years of experience (10 of them under Windows). And the complexity of the software has not changed much; in fact it's the same as it was in R12. Under Windows, the best combination was NT4 SP6 with AutoCAD 2000. It would be nice to have a Linux version too... But it seems that Autodesk is not thinking about Linux (yet).

Man... I'm just realizing how far we are now from audioman's question...  :) We could split this thread into at least 3 different ones:
- VisualC++ vs. GCC
- (the old and classical) Windows vs. Linux
- software developers' opinions about Windows vs. non-developers' opinions ;)
Posted on 2005-06-02 05:03:35 by bszente

everybody is lousy except Microsoft

Hehe, never said that... There's a large number of flaws in Microsoft software; they have too many programmers. The kernel team is very competent, though.
Posted on 2005-06-02 05:39:22 by f0dder

The kernel team is very competent, though.

No question about it. It must be very difficult to keep that army of programmers organized. :)
Posted on 2005-06-02 06:19:41 by bszente

No question about it. It must be very difficult to keep that army of programmers organized. :)


Such a giant company should have good commanders to keep up that army. :)
Posted on 2005-06-02 12:28:41 by Vortex

It's very interesting how the performance varies with the modification of the code.


When optimizing, you should know where the bottleneck is. You should also be aware of what code your compiler generates and of your compiler's flags. If you further optimize my snippet (I didn't bother, because its purpose was to be clear and concise) by applying loop unrolling, for example, you can get it down to about 1.5 seconds (from 8.8 seconds) with 5 "rounds" per iteration.
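death's snippet itself isn't quoted in this thread, so the following is only a hedged illustration of the unrolling idea: the function name, the byte-summing task, and the helper names are my assumptions, not his code. Five "rounds" per iteration might look like:

```c
#include <stddef.h>

/* Hypothetical sketch of 5-way loop unrolling; the task (summing bytes)
   and all names are illustrative, not the actual snippet from the thread. */
unsigned long sum_unrolled(const unsigned char *buf, size_t n)
{
    unsigned long sum = 0;
    size_t i = 0;

    /* main loop: 5 "rounds" per iteration, so the counter compare and
       branch are amortized over five elements */
    for (; i + 5 <= n; i += 5)
        sum += buf[i] + buf[i + 1] + buf[i + 2] + buf[i + 3] + buf[i + 4];

    /* tail loop: the leftover 0..4 elements */
    for (; i < n; i++)
        sum += buf[i];

    return sum;
}
```

The tail loop keeps the result correct for lengths that aren't a multiple of five; the win comes from fewer branches and from giving the compiler more freedom to schedule the additions.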
Posted on 2005-06-04 18:41:50 by death

If you further optimize my snippet

Yes death, but don't forget that your snippet is not compatible with mine. Your snippet assumes that the input vector is just a simple RGBRGBRGB sequence starting with the first line, and that the output vector is also a plain sequence. You're right, that is a fortunate situation; it can be optimized.

BUT, when I posted the problem, I explained that you should take the parameters as they are. Consider that you are given the parameters this way:
- input: an array of RGBRGBRGB, but starting with the last line
- output: a two-dimensional array (240 lines, 320 columns)
In that case I don't think you can achieve 1.5 seconds.

Of course it would be much easier if the initial sequence were RGB0RGB0, aligned on 32 bits; that would be faster. Or if you used SSE to do the multiplication and sum, it would be even faster, etc. But that is a different question.
I agree that it would be better to organize the data the way you did in your snippet, but there are cases when you have no choice: you have to respect some specifications.
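To make the constraint concrete, here is a minimal sketch under exactly those parameters: a bottom-up flat RGB input and a 240x320 row-pointer output. The luminance weights 77/150/29 are the common BT.601-style integer approximation and are my assumption; the coefficients in the original code may differ.

```c
#include <stddef.h>

#define WIDTH  320
#define HEIGHT 240

/* Sketch only: Vector is a flat RGBRGBRGB stream whose first pixel
   belongs to the LAST image line; Buffer is an array of 240 row
   pointers, 320 bytes each. Weights are an assumed BT.601-style set. */
void Convert(unsigned char **Buffer, const unsigned char *Vector)
{
    for (int y = 0; y < HEIGHT; y++) {
        /* rows are stored bottom-up in the input stream */
        const unsigned char *src = Vector + (size_t)(HEIGHT - 1 - y) * WIDTH * 3;
        for (int x = 0; x < WIDTH; x++) {
            unsigned r = src[3 * x];
            unsigned g = src[3 * x + 1];
            unsigned b = src[3 * x + 2];
            Buffer[y][x] = (unsigned char)((77 * r + 150 * g + 29 * b) >> 8);
        }
    }
}
```

Each output row is still written sequentially, but the input is walked backwards one row-block at a time, and every output row lands in a separately allocated buffer, which is exactly what rules out treating the whole image as one flat stream.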
Posted on 2005-06-05 09:54:22 by bszente

Yes death, but don't forget that your snippet is not compatible with mine.


My code is compatible with your data structures. C and C++ don't have real multi-dimensional arrays; they have "arrays of arrays". This means all the elements in your "multi-dimensional" array are contiguous. Now, if I accessed this array using a pointer to something other than unsigned char, you might have a point, because in theory "fat pointers" are allowed, and such an access could cause a range check error. In practice, I have only heard of one compiler that used to do range checking (Centerline). But since I do use a pointer to unsigned char for access, the issue is moot: compatibility is guaranteed by the standard.

P.S. I missed the fact that your code "starts with the last line". With some minor changes, my code will be compatible... However, I wonder how that will affect performance (e.g., with regard to caching).
Posted on 2005-06-05 12:54:05 by death

C and C++ don't have real multi-dimensional arrays; they have "arrays of arrays". This means all the elements in your "multi-dimensional" array are contiguous.

I'm sorry, but I think you are not right on this point (or maybe I'm missing something?). The elements of such a multidimensional array are NOT contiguous; they are contiguous only within one row (or whatever you call it in an array of 3 or more dimensions), and the rows can be at any memory locations. For example, in physical memory the second row can come before the first, and the others can be mixed up. You cannot know where malloc will allocate the memory.
My code snippet starts with this (you may check it):
unsigned char** Buffer;
...
void Convert(unsigned char* Vector)
{
...etc.

This means that Buffer is a two-dimensional array. Now how do you create such a vector?
Buffer = (unsigned char**) malloc(240 * sizeof(unsigned char*));
for (i = 0; i < 240; i++)
    Buffer[i] = (unsigned char*) malloc(320);
...
Buffer[i][j] - is defined


Your output vector was a simple unsigned char*; you cannot access it with [][] indices.

unsigned char out[320 * 240];
... or ...
unsigned char* out;
out = (unsigned char*) malloc(320 * 240);
...
out[i][j] - is not defined; it would access illegal memory locations.

This is the reason why I said you cannot significantly decrease the execution time any more: you can access the two-dimensional array linearly only row by row. I think the code could still be optimized using SSE instructions, but only from the arithmetic operations' speed point of view. Apart from rearranging the loop, I don't know if there are more options.
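As a small demonstration of the two layouts being discussed (a sketch using the thread's 240x320 sizes; the helper names are mine): a true C 2-D array is one contiguous block, while a malloc'ed pointer-to-pointer "array" gives every row its own heap block.

```c
#include <stdlib.h>

enum { ROWS = 240, COLS = 320 };

/* A true 2-D array is guaranteed contiguous, so row 1 starts exactly
   COLS bytes after the start of row 0. */
int true_2d_is_contiguous(void)
{
    static unsigned char arr[ROWS][COLS];
    return &arr[1][0] == &arr[0][0] + COLS;
}

/* The thread's layout: an array of row pointers. Each row is a separate
   malloc'ed block, so adjacent rows need NOT be adjacent in memory. */
unsigned char **alloc_rows(void)
{
    unsigned char **buf = malloc(ROWS * sizeof *buf);
    for (int i = 0; i < ROWS; i++)
        buf[i] = malloc(COLS);
    return buf;
}
```

With the first layout a single flat unsigned char* can walk the whole image; with the second, any linear access has to stop at each row boundary, which is exactly the limitation described above.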
Posted on 2005-06-05 15:05:56 by bszente
I'm sorry, you are correct. For some reason, I remembered your buffer having the type of a true two-dimensional array of unsigned char, which is what I was talking about. I guess you're stuck with bad data structures.
Posted on 2005-06-05 15:30:09 by death
I'm sorry

No problem... ;)

I guess you're stuck with bad data structures.

Definitely :sad:
Posted on 2005-06-05 16:12:06 by bszente