For example, Windows XP compiled explicitly for P3, for P4, etc. I believe we would really feel the performance boost.
I did a few tests... a P4 Celeron 1.7GHz Gentoo box was beaten by a 700MHz Athlon running WinXP... and XP is generic code. So... whatever ;)
I don't remember all the details about GCC 4.0, but it is slower than previous versions. The authors wanted to ship it with certain features for some reason, but didn't have time to optimize thoroughly. Supposedly the speed will come back as minor upgrades occur.
Hi guys!
I'm sorry, but yesterday I was busy so I could not respond to your posts.
Jibz: I was talking about the EXE file size. If I start to debug a VC exe file, there is a lot of startup code. Here comes Vortex's advice to eliminate that "unnecessary" stuff. (Thanks Vortex for the link - however it seems that I cannot download those files - the file size is 0.) It might be true that the size of the *real* generated code is smaller than MinGW's (at least it is true for the code generated for the procedure below).
I made a benchmark, so let's talk with real numbers: here is the C source code of a procedure, which converts a color picture to grayscale:
unsigned char** Buffer;
...
void Convert(unsigned char* Vector)
{
    unsigned char* Data = Vector;
    float s, t;
    for (int j = 239; j >= 0; j--)
        for (int i = 0; i < 320;)
        {
            s = 0;
            s += (*Data++) * 0.098;
            s += (*Data++) * 0.504;
            s += (*Data++) * 0.257;
            Buffer[j][i++] = 16 + s;
            t = 0;
            t += (*Data++) * 0.098;
            t += (*Data++) * 0.504;
            t += (*Data++) * 0.257;
            Buffer[j][i++] = 16 + t;
            s = 0;
            s += (*Data++) * 0.098;
            s += (*Data++) * 0.504;
            s += (*Data++) * 0.257;
            Buffer[j][i++] = 16 + s;
            t = 0;
            t += (*Data++) * 0.098;
            t += (*Data++) * 0.504;
            t += (*Data++) * 0.257;
            Buffer[j][i++] = 16 + t;
            s = 0;
            s += (*Data++) * 0.098;
            s += (*Data++) * 0.504;
            s += (*Data++) * 0.257;
            Buffer[j][i++] = 16 + s;
            t = 0;
            t += (*Data++) * 0.098;
            t += (*Data++) * 0.504;
            t += (*Data++) * 0.257;
            Buffer[j][i++] = 16 + t;
            s = 0;
            s += (*Data++) * 0.098;
            s += (*Data++) * 0.504;
            s += (*Data++) * 0.257;
            Buffer[j][i++] = 16 + s;
            t = 0;
            t += (*Data++) * 0.098;
            t += (*Data++) * 0.504;
            t += (*Data++) * 0.257;
            Buffer[j][i++] = 16 + t;
        }
}
Vector is a byte vector in BGRBGRBGR... format, and it starts with the last row. That's how I get the frame from the webcam. So let's consider this example. I know that this is not an optimal format - it's not 32-bit aligned, etc. CONSIDER THIS A GIVEN SITUATION.
As you can see, I unrolled the loop to help instruction prefetch, so it is a loop that a compiler can optimize well. I made a little program which calls this procedure 10000 times. Let's see the results:
Compiler                | Time (ms) | Compiler options
========================|===========|==========================================
Visual C++ Toolkit 2003 |     55265 | /GA /arch:SSE2 /arch:SSE /G7 /Ot /O2 /Og
                        |     55187 | same
------------------------|-----------|------------------------------------------
MinGW 3.2.0-rc3         |     34141 | -march=i586 -O2
(GCC 3.4.2)             |     34595 | same
                        |-----------|------------------------------------------
                        |     23656 | -march=pentium4 -O2
                        |     22296 | same
========================|===========|==========================================
Quite interesting results, aren't they? And I was generous with the MS VC++ Toolkit in the optimization flags. GCC produces significantly better results even when it compiles only for a Pentium I. So I would not be so sure about the Microsoft compilers. The GCC people know their job. You can see that GCC REALLY uses SSE instructions, unlike VC, which does not, even though I told it to.
And there are many more optimization options for GCC to improve the code further.
I attached the two assembly listings (they are too big to list here...). You can see that GCC loads those float constants into the FPU before entering the loop. It's trivial, yet VCToolkit does not do it. GCC uses SSE instructions to move data, VCToolkit does not.
And this was only a quite simple algorithm.
f0dder: it's really strange what you are saying. Beaten in what manner? GUI responsiveness? Or what? Test the same program under XP and Gentoo, and test your HDD performance too, and you will see which one is faster. I'm sure that if you configure your Gentoo properly there is no way for XP to match its performance (except the GUI). I admit that GTK2 is not as fast as the XP GUI, but in anything else it can hardly be beaten (I'm talking about OpenGL, image processing, the Eagle PCB software, Xilinx, etc.).
PS.: Hey guys, I don't mean to run an anti-M$ campaign here, don't get me wrong. I just want to talk about these topics objectively.
I made a few tests without loop unrolling, to prove that loop unrolling is really useful (on modern processors). So I kept only the first of those 8 conversions.
The results:
Compiler | Time (ms) | Compiler options
========================|===========|==================
Visual C++ Toolkit 2003 | 55687 | /GA /arch:SSE2 /arch:SSE /G7 /Ot /O2 /Og
| 55718 | same
------------------------|-----------|------------------
MinGW 3.2.0-rc3 | 26375 | -march=pentium4 -O2
GCC 3.4.2 | 26016 | same
========================|===========|==================
VC: almost no difference -> conclusion: the VC Toolkit with those optimization flags does not look ahead in the code. That's a bit ridiculous.
GCC: almost 3 seconds of difference.
I made another test: I turned on the /Ox maximum optimizations for VCToolkit. The results WITH loop unrolling: 53812 and 53890.
Less than 2 seconds of difference. It seems that VCToolkit does a little look-ahead only with the /Ox maximum optimization flag.
But it definitely does not even approximate GCC's performance (at least in this case). And as I mentioned, there are a lot more options for GCC to improve those results further.
BTW, I forgot to mention that I ran the tests on my computer: P4 2A GHz, Intel D845GBV mainboard, 512MB DDR 266MHz.
Comments are welcome; if I forgot something or if I'm not right then please correct me (the last thing I want is to spread stupid things :lol: )
Hi bszente,
Here are all the examples in one package.
Hello bszente, try this:
void convert(const unsigned char *in, unsigned char *out)
{
unsigned char *out_end = out + 320 * 240;
while (out != out_end) {
*out++ = in[0] * 0.098 +
in[1] * 0.504 +
in[2] * 0.257 +
16;
in += 3;
}
}
unsigned char input[320 * 240 * 3];
unsigned char output[320 * 240];
int main(void)
{
int i;
for (i = 0; i < 10000; i++)
convert(input, output);
return 0;
}
About 8.8 secs on a P4 2.4GHz with some apps running. Compiled using:
cl /O2 /Ox /GL /G7 /QIfist test.c
What are the results from MinGW?
Thanks Vortex for the archive. Is something wrong with the forum? I still get an empty zip file when I download it; after the download the file size is 0. However, I am able to download zip files correctly from other webpages. I'm using Firefox.
death, this is getting more and more interesting... I found the trick of VCToolkit: the /QIfist option does the trick, not the optimization levels. Did you try to run your program without that compilation option? You will get more than 30 seconds. I see you changed the code: you simply put the result into a similar linear array, not a two-dimensional one.
Ok. Here are the results on my computer:
VisualC++ Toolkit: 10.8 secs with the same options that you have used
MinGW: between 10.7-10.8 with "-march=pentium4 -O3"
Now the interesting part: if I compile the procedure from my previous post (the original one) with VCToolkit using the /QIfist option, then the execution time drops down to 12 secs. It's quite impressive. I wonder how this happens (I know that /QIfist eliminates a call after each float-to-int conversion), and whether MinGW can be made to achieve this performance.
I changed the procedure to this:
unsigned char** Buffer;
...
void Convert(unsigned char* Vector)
{
    for (int j = 239; j >= 0; j--)
        for (int i = 0; i < 320; i += 8)
        {
            Buffer[j][i]     = Vector[0]  * 0.098 + Vector[1]  * 0.504 + Vector[2]  * 0.257 + 16;
            Buffer[j][i + 1] = Vector[3]  * 0.098 + Vector[4]  * 0.504 + Vector[5]  * 0.257 + 16;
            Buffer[j][i + 2] = Vector[6]  * 0.098 + Vector[7]  * 0.504 + Vector[8]  * 0.257 + 16;
            Buffer[j][i + 3] = Vector[9]  * 0.098 + Vector[10] * 0.504 + Vector[11] * 0.257 + 16;
            Buffer[j][i + 4] = Vector[12] * 0.098 + Vector[13] * 0.504 + Vector[14] * 0.257 + 16;
            Buffer[j][i + 5] = Vector[15] * 0.098 + Vector[16] * 0.504 + Vector[17] * 0.257 + 16;
            Buffer[j][i + 6] = Vector[18] * 0.098 + Vector[19] * 0.504 + Vector[20] * 0.257 + 16;
            Buffer[j][i + 7] = Vector[21] * 0.098 + Vector[22] * 0.504 + Vector[23] * 0.257 + 16;
            Vector += 24;
        }
}
Results:
VisualC++ Toolkit: ~10.6 secs with "/O2 /Ox /GL /G7 /QIfist" (maximum optimization); without /QIfist the result is 53 secs
MinGW: ~11.8 with only "-O3 -march=pentium4", this is far from the maximum optimization, so it is possible to improve this.
It's very interesting how the performance varies with the modification of the code.
hi bszente,
I checked the zip attachment in my post and I was able to download it without problem.
For your information, I emailed you the example package.
Thanks Vortex.
So guys, any opinion/comment related to these performance evaluations? Or any personal experiences, something...
My conclusion is that both compilers are good, quite the same. So both can be used ;)
MinGW is free and constantly improving, and has a native debugger (this is important).
The free Visual C++ Toolkit is not guaranteed to be upgraded, and it's harder to find a free compatible debugger (if I'm not wrong).
I wanted to note (for those who don't know already) that the MinGW Studio IDE and the Relo IDE both serve as good front ends for the MinGW compiler. Unlike Dev-C++, when Win32 API functions are used these two IDEs pop up a contextual window to help you remember the various parameters the function takes. Pressing F1 when the cursor is on a function will display that function from the Win32.hlp file. Relo can also serve as a front end to Borland, Digital Mars and Visual C++.
it's really strange what you are saying. Beaten in what manner? GUI responsiveness? Or what? Test the same program under XP and Gentoo, and test your HDD performance too, and you will see which one is faster. I'm sure that if you configure your Gentoo properly there is no way for XP to match its performance (except the GUI).
GUI responsiveness, application loading time, etc. Same speed harddrives, DMA enabled. I even statically linked the shared libraries on gentoo, which sped things up a bit... but it was still significantly slower, apps grew a lot in size, and of course the advantage of shared libraries was gone.
And can hardly be beaten in OpenGL? heh. What cards are supported under linux, basically NVidia, ATi, and VIA unichrome?
It's quite impressive. I wonder how this happens (I know that /QIfist eliminates a call after each float-to-int conversion), and whether MinGW can be made to achieve this performance.
VC by default does the ftol rather than fist, to be safe; it cannot assume you're not changing the precision flags etc. in other parts of the code. Changing the control word is expensive.
You have OllyDbg as well as the WinDbg debugger... WinDbg is very powerful. I've never really liked GDB (I guess there are IDEs that integrate GDB, that would help.) http://www.codeblocks.org/ looks promising.
When I've done tests (note again: GCC 3.x series), GCC was almost always beaten by VC 2003, except for a few rare occasions. Sometimes the Intel compiler was better than VC, sometimes a (tiny) bit slower. It all depends on the type of code, and how you write it; some kinds of "help" (loop unrolling and pointer magic) work better in some compilers than others. Some of the really big advantages, though, can't be seen in short examples like a single algo; ICC's and VC's link-time code generation comes to mind.
GUI responsiveness, application loading time, etc. Same speed harddrives, DMA enabled. I even staticalling linked the shared libraries on gentoo, which sped up things a bit... but it was still significantly slower, apps grew a lot on size, and of course the advantage of shared libraries was gone.
Well, I don't know what to say, but I strongly have the impression that you may have done something wrong with your install. Under Windows I have every driver installed and properly configured, but it doesn't come close to Gentoo's performance in applications (Xilinx ISE, Eagle, etc., and even OpenGL games), not to mention the memory consumption. Performance and especially reliability are what urged me to switch from Windows. I've learnt my lesson, and I don't want to make the same mistake again. I keep up to date with Windows only because of the market: if you go to work at a company, it's almost certain they work under Windows.
And can hardly be beaten in OpenGL? heh. What cards are supported under linux, basically NVidia, ATi, and VIA unichrome?
Well, I'm talking about OpenGL performance and quality on the same hardware, NOT driver support. Please don't confuse the two; it's not the same. You're absolutely right that under Windows the hardware support is better, that's beyond question. Everybody knows that. But most of the serious cards have good Linux support.
Anyway, I consider useless to continue this debate ;) on this forum. The target of the forum is Win32Asm not Linux :)
http://www.codeblocks.org/ looks promising.
It's really a good IDE, I'm using it a lot. The autocomplete feature needs improvement, but I'm sure it will be done in the next months.
Some of the really big advantages, though, can't be seen in short examples like looking at a single algo
You're absolutely right. But taking into account that VC is the product of a company with thousands of developers behind it, and GCC is a free product, you must admit that GCC is a great achievement with (almost) the same performance as MS's.
and even OpenGL games), not to mention the memory consumption
I hope you're not using the taskmgr memory tab to judge memory consumption, at least not without knowing what these figures actually mean... as for OpenGL, well whatever. Less cards supported, and DirectX generally seems to have better performance anyway.
Performance and especially reliability are what urged me to switch from Windows. I've learnt my lesson, and I don't want to make the same mistake again.
*shrug* - for workstations, I see no reason to go linux. I have 14-day uptimes without any problems, with constant network activity, some gaming, coding, DVD transcoding, etc... downtimes have been because of hardware fiddling or because I wanted to sleep without noise. Even for servers, I've seen win2k boxes with more than a year of uptime.
Anyway, I consider useless to continue this debate
It's certainly not useless, debates are always good as long as they are not polemic. With linux vs. windows it can be a bit hard sometimes, as some of the parameters aren't very quantifiable ("perception" issues), but I try to remain reasonably objective (even if windows biased). Yes, I *do* have linux boxes running here, and I have some experience with it.
http://www.codeblocks.org/ looks promising.
It's really a good IDE, I'm using it a lot. The autocomplete feature needs improvement, but I'm sure it will be done in the next months.
Well, it's not even beta yet, is it? :)
You're absolutely right. But taking into account that VC is the product of a company with thousands of developers behind it, and GCC is a free product, you must admit that GCC is a great achievement with (almost) the same performance as MS's.
Indeed, no question about this - but I don't settle for second-best. GCC is 3rd-best at most, and when VC2003 is free... well ;)
It's certainly not useless, debates are always good as long as they are not polemic. With linux vs. windows it can be a bit hard sometimes, as some of the parameters aren't very quantifiable ("perception" issues), but I try to remain reasonably objective (even if windows biased).
:) Well it's OK then. I'm glad that you think that way. Somebody has to support Windows too ;)
You know, I was a fanatic Windows user too, until I switched to Linux. Here is why... Maybe you have read it already.
I hope you're not using the taskmgr memory tab to judge memory consumption
:lol: :lol: Actually I was speaking about programs under Windows vs. programs under Linux. Linux programs use less memory. The same advice: try to run the same app under Windows and Linux, and you will find out.
DirectX generally seems to have better performance anyway.
I'm sorry, but I cannot agree with you on this question. I wonder why game developers don't share your opinion. All the major, well-written games are based on OpenGL (UT2004, Quake, Half-Life, etc.). I'm sure if Direct3D yielded better performance they would use it (not to mention that it's easier to program). Only EA and other such games work only with Direct3D (those games which cannot run even on a 3GHz processor with an nVidia i-don't-know-how-many-thousand). Anyway, this is only my modest opinion, I'm not a game expert after all.
BTW, why are 3D Studio MAX, AutoCAD and other CAD programs using OpenGL? They are exclusively Windows apps, so they could use Direct3D without any problem. Again... why is OpenGL used where performance is needed? And there is definitely a quality difference too (think of those games that had both engines).
*shrug* - for workstations, I see no reason to go linux. Even for servers, I've seen win2k boxes with more than a year of uptime.
You're a lucky guy. The exception stands beside the rule. :lol:
Well, it's not even beta yet, is it? :)
Well, actually it is a beta, and it is rapidly evolving. Compared to the other IDEs the workspace handling is really great.
Indeed, no question about this - but I don't settle for second-best. GCC is 3rd-best at most, and when VC2003 is free... well ;)
Yes, but VC2003 is not guaranteed to have newer versions for free. OK, hardly, but I agree to put GCC in 3rd place, at least for now. ;) In the summer I will have an image processing project, and then I will make a deeper study. This thread really made me curious. I will return to this subject.
You know, I was a fanatic Windows user too
I'm not fanatic, I admit when Linux has done things better - which just isn't very often.
I hope you're not using the taskmgr memory tab to judge memory consumption
Actually I was speaking about programs under Windows vs. programs under Linux. Linux programs use less memory. The same advice: try to run the same app under Windows and Linux, and you will find out.
Again, I hope you weren't using task manager to judge the memory consumption of the windows applications.
I wonder why game developers don't share your opinion. All the major, well-written games are based on OpenGL (UT2004, Quake, Half-Life, etc.)
Quake (and other id games) use OpenGL because Carmack is a zealot... however, Doom 3 has *very* lousy performance. I even think Carmack has stated that his next game will probably be Direct3D (not 100% on that one, though). Half-Life supports both D3D and OpenGL, and Half-Life 2, which can even run reasonably okay on a PII with a Radeon 8500, is D3D exclusively. So is World of Warcraft, which again runs on pretty modest equipment. And Dungeon Siege, which runs okay on an Athlon 700 with a GeForce 2 card (heck, it even runs on integrated Intel Extreme Graphics 2). I've played a number of games, and my brothers play even more - while D3D games sucked a number of years ago, the API has matured a lot in recent years, and it seems more powerful and flexible (and certainly faster to adopt new features!) than OpenGL.
BTW, why are 3D Studio MAX, AutoCAD and other CAD programs using OpenGL?
Both 3DS Max and Maya support Direct3D, and 3DS has done so for many years now. The first versions were a bit buggy, partially because of the earlier D3D versions, partially because of bad implementations in the programs. The reason many engineering apps use OpenGL? It used to be the de facto standard. A lot are adopting D3D now, though - I wonder why? (Perhaps because nvidia are the only ones to have good GL support, apart from the really high-end cards. Try running Blender 3D with ATi Catalyst drivers...)
And there is definitely a quality difference too (think of those games that had both engines).
Yep, Direct3D games used to look pretty bland and dull. Don't know why, though; have a look at Half-Life 2, Dungeon Siege and World of Warcraft... and don't judge by multi-engine games, they are often not very well designed.
*shrug* - for workstations, I see no reason to go linux. Even for servers, I've seen win2k boxes with more than a year of uptime.
You're a lucky guy. The exception stands beside the rule. :lol:
Again, *shrug*. All the NT boxes I've used have been stable (as long as the hardware has been stable and I haven't been dealing with driver development). That's four different boxes at my home, and 5-6 boxes at the museum where I work. Not to mention my friends' boxes. I guess it's 25+ boxes in total, where at least 5-6 of them (mine and 'geek' friends' workstations) have had 7+ day uptimes without problems. So I'm not so sure I'm the exception :)
Yes, but VC2003 is not guaranteed to have newer versions for free.
True... but some things are worth paying for. Visual Studio *is* very expensive, though. And it is nice that GCC *is* progressing... but so is VC and ICC.
PS: it would be very hard to bring quantifiable numbers to support the d3d vs. gl debate, since I haven't done much 3d programming myself. The APIs require different programming methods, and performance varies vastly between graphics cards and drivers. I've used both nvidia and ati hardware, as well as intel onboard graphics, and it has been my general impression that with recent titles and programmers that know what they're doing, DirectX provides better performance and less hassle.
I agree with f0dder on all those topics.
My win2k box has uptimes of a month (just because once a month I open the case to clean-up).
VisualC++ 2003 is also sold separately, for $109 here, btw. Contains the IDE, and probably an SDK-limited MSDN (just what one needs).
Probably a 2005 version will come out soon.
What is wrong with the gcc package is that the command line tools have very complex switches & parameters.
Vortex: yeah, that's the main problem for users. It's difficult to keep track of all those command line switches, and that's why many people avoid Linux. You have to know what you are doing, and it takes much more time to get accustomed to it. And today's tendency is the fastest time-to-market, no matter how, so there is no time to learn command line arguments :)
Well, I'm out of arguments at this point regarding OpenGL and C compilers :) But of this C compiler stuff I will return to this subject after the summer. It realy made me curious.
It's so strange, because the serious research projects usually run primarily under Linux: Yann LeCun's Lush and LeNet, FFTW, Singular, etc. There must be a reason for this (which I don't know), and I don't think money is the reason.
There is one more question to ask you guys... Neither of you mentioned anything about my post in which I explain why I moved to Linux. There is a strong point there, where I talk about using production software NOT made by Microsoft: AutoCAD, ModelSim, Xilinx, etc. (I'm familiar with these ones). Or even MS Office.
You guys are software engineers; mainly you deal with compilers, IDEs, debuggers, everything that is usually related to IT. That's a completely different world compared to production software. If you start to work with such products, you will find out what Windows is capable of: crash after crash, even on expensive, good, reliable hardware. And you cannot dismiss this by saying those products are badly written. I've been supporting AutoCAD and other software for my father for 10 years. He is not an IT specialist, so he does not use IT programs; he is, let's say, a normal user, but he has been running his architectural business on a professional level since the i386 and MS-DOS era. Reliability is a distant dream in this field under Windows.
I don't consider myself a software engineer; I do programming as a passion, a hobby. I mainly use electrical engineering tools, but I'm lucky that almost every tool I use works under Linux too.
I've run out of arguments - I mean that I don't have the knowledge to go further in this debate. I will certainly find out new things in the near future, and I will share them with you guys.
It's a very difficult topic, this compiler comparison. I have found the following links:
- Willus.com's 2002 Win32 Compiler Benchmarks: the guy is not fair, /O6 /Ox for Microsoft is NOT equal to -O2 -march=i686; it's equal to at least -O3 -march=i686 with -mtune=pentium3, and even this way GCC is faster in the LAME encoding test
- Linux C and C++ Compilers and some comment on it Re: Comparing Linux C and C++ Compilers: Benchmarks and Analysis
- Comparing C/C++ Compilers: you have to register to see it. The algorithms used to test the speed are not so good (I don't consider them a good reference).
Hi bszente,
The developers of Linux should try to simplify those complicated command line switches. The syntax of the VC compiler switches is not as horrible as that of gcc.