Pretty good points Shawn, you have put some of my thoughts in written form :) :alright:

Like it or not, money is probably the largest driving force in this world, perhaps beaten by religion (which is a bit interesting in this context, too... the "righteous GPL zealots" vs. the "evil capitalist bastard programmers" ;)).

The "standard" way opensource projects are being run is also way too anarchistic, and causes problems like you have pointed out. Linux kernel, MySQL, PHP have rather strict management and financial interests, which is what causes them to 'work'. Somewhat of a paradox, since on the surface GPL seems to be against financial interests and strict control.

I find it a bit sad that so many programmers follow the pied piper and release their code under GPL. Imagine an operating system where everything was fully GPL without any extra clauses - you wouldn't be able to develop closed-source applications, and the software market would come to a grinding halt. We would be stuck with 70s technology and ideologies.

It's also sad that people seem to think that GPL is the only opensource license out there. I wouldn't mind opensourcing commercial software of my own, to some degree. But it would certainly be under a strict license to avoid being ripped off - something along the lines of providing source code to my clients.
Posted on 2004-05-02 15:45:46 by f0dder
As you already said, Genesis isn't going anywhere. It may be a 3d engine and it may be opensource, but it doesn't produce any games, and certainly not any games that will be able to compete with commercial ones in terms of quality.
The same goes for Blender. Blender is a 3d modeler, and that pretty much sums it up. It is in no way able to compete with 3dsmax, Maya, LightWave or other commercial modelers. The difference between Blender and the others is incredibly large (just take a look at the size of Blender and compare it to the others - that should give an indication of how well they match up).

As for the linux kernel... It's about as far behind technically as KDE and Gnome are. It just isn't that obvious. But Tanenbaum pointed it out from the beginning. The design is monolithic, outdated and inflexible. The Windows kernel is far better. It's modular, it supports multiple subsystems simultaneously, it can offer near-realtime performance as easily as round-robin services, etc, etc. In fact, the FreeBSD kernel is better than linux as well, but it gets less attention than linux for some reason.

Finally, writing web applications means you make money with the tools you use to write those applications. The browser is merely a way to deliver it to the customer. You produce, the browser consumes (it's like saying that Microsoft makes money by selling CDs. It's not the CDs, it's the software on them that you pay for).
Posted on 2004-05-02 17:02:52 by Scali
Scali is in the process of writing an article about this, seems pretty interesting... I agree with a lot of the stuff in it :) http://bohemiq.scali.eu.org/New%20World%20Order.txt
Posted on 2004-05-02 19:06:38 by f0dder
When you are working for money (food/shelter) you are forced to make the hard choices. In economic terms it's called opportunity cost - every choice that is made eliminates other possible choices. True, this can be done with a strong leader or a vision of the goal, but money certainly helps make the choices. If you are not programming for money (and I speak from much personal experience) you can spend as long as you want on any minute detail for any reason whatsoever. I've spent weeks on algorithms until they were appealing to me - demonstrating some philosophy (self-similarity / dualism / etc.), utilizing seldom-used instructions, satisfying compression requirements, or shaving off the last cycles. But I don't have a goal to finish. ;)
Posted on 2004-05-02 19:49:56 by bitRAKE
I just had a look at blender, since its feature list seemed nice. Not that I'm an artist or anything, but it can be fun enough wasting some time. From the feature list it certainly didn't seem like a full-blown replacement for 3D Studio Max, but it seems respectable enough, fully capable of creating game content, the sample media provided is pretty, and it's free.

So, okay. The UI is probably the slowest I've ever seen. It takes probably 3-4 seconds to repaint if I have a notepad on top and then switch focus to blender. It's practically impossible to actually use it for anything. I dunno what's wrong with my setup - 2.53ghz P4, Radeon 9600xt/256meg, latest catalyst drivers, and a gig of ram... that ought to be enough to use it, right? :rolleyes:

...but I guess that's what you get from a portable GUI, OpenGL, and opensource software. The readme does, however, mention that ATI in particular has been downgrading 2D support in favour of 3D in its OGL drivers. Shows just how useful developing for OpenGL is :rolleyes:
Posted on 2004-05-02 20:45:35 by f0dder
That's a good article and it brings up some other interesting points. The main point of the article is that you can't give away sourcecode for free and charge for support, because for most projects (other than operating systems) there won't be enough support calls (if you design the user-friendliness well enough). Of course, most OSS projects aren't designed well enough, so maybe it's a moot point.

In the end, he says, if there is too much GPL then commercial/proprietary vendors may lose the motivation to produce their own software, and thus competition and innovation cease. I happen to agree; I can't think of too much that the OSS community has "innovated", and for the most part, I can't think of how much they will innovate when there is no one left to "imitate" and "overthrow". A lot of people will be out of work, and that can't be good for the economy, either.

You want a fair number of programmers who get paid to program for a living, because they will know all the tricks of the trade. If no one gets paid to program full-time, then they must get paid to do something else full-time, like flip burgers, balance budgets, sell cars, build houses, fly airplanes, clone animals, whatever. At some point, there won't be enough knowledge left to produce any software in the first place.

Another point he makes is that GNU (the FSF) was created as an alternative to UNIX because of UNIX's strong commercialization, so they built a UNIX-like OS. Of course, the big OS maker now is Microsoft, whose kernel is nothing like the UNIX kernel, so GNU isn't even trying to offer a remotely similar alternative anymore.

Security? Not a point in the article, but I'll flaunt my opinion anyway. The minute Linux has even 10% of the desktop market, we'll start to see how insecure it really is. Of course, I've seen the buglists of current OSS projects, and they don't get press (because that would contradict their only argument against MS). But one day they will. Then what will they say?

GPL has its place, but it can never support an economy or satisfy a business and their own unique needs.


Thanks,
_Shawn
Posted on 2004-05-02 20:49:54 by _Shawn
Good point about security. Most OSS (well, the GPL/linux types anyway) seem hellbent on using only ANSI and POSIX, and sticking to regular C - not utilizing the added safety of, say, C++ strings instead of C-style char buffers. Nor do they use advances in compiler technology, like the buffer checks (inexpensive, in the context of networked applications) offered by, for instance, Microsoft's compilers.
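
To illustrate the kind of difference I mean (just a toy C++ sketch; the buffer size and input string are made up for illustration):

    #include <string>
    #include <iostream>

    int main()
    {
        // Imagine this arrives from the network or from user input.
        const char *input = "a string that is much longer than a typical fixed buffer";

        // C style would be a fixed-size buffer, and the classic
        //   char name[16];
        //   strcpy(name, input);
        // writes straight past the end of 'name'. You have to get the
        // bounds right yourself, every single time.

        // C++ style: std::string allocates whatever it needs, so there is
        // simply no fixed-size buffer to overrun.
        std::string name(input);
        name += " - and appending doesn't overflow anything either";

        std::cout << name << "\n";
        return 0;
    }

(The compiler-side checks I mentioned are the kind you get from MSVC's /GS buffer security switch, if I recall the name correctly - the source code doesn't have to change at all.)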

Add to this that the original libc string routines are very insecure and easy to misuse, and that on the rare occasions when people do use the safer versions (strncpy instead of strcpy, for instance), they tend to misuse those as well. As if this wasn't bad enough, even kernel programmers seem to be confused with regards to the use of signed vs. unsigned numbers - this has led to more than one local root exploit.
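
Here's a toy sketch of both mistakes (the function, the buffer sizes and the values are made up purely for illustration, not taken from any real project):

    #include <cstring>
    #include <cstdio>

    const int BUF_SIZE = 64;

    // 'len' arrives from an untrusted source (say, read off the network)
    // as a *signed* int.
    void handle_packet(const char *payload, int len)
    {
        char buf[BUF_SIZE];

        // Looks like a bounds check, but a negative 'len' slips straight past it...
        if (len > BUF_SIZE)
            return;

        // ...and is then converted to a huge unsigned size_t here, so memcpy
        // runs right off the end of 'buf'.
        std::memcpy(buf, payload, static_cast<std::size_t>(len));
    }

    int main()
    {
        const char *name = "a name that is longer than fifteen characters";
        char dst[16];

        // Typical strncpy misuse: if 'name' doesn't fit, strncpy does NOT write
        // a terminating '\0', and 'dst' is then used as a C string anyway.
        std::strncpy(dst, name, sizeof(dst));
        dst[sizeof(dst) - 1] = '\0';   // the fix people tend to forget

        std::printf("%s\n", dst);
        handle_packet("ok", 2);        // a well-formed call, just so the function is exercised
        return 0;
    }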

It's true that windows has had some particularly nasty *remote* root exploits, especially because of the DCOM stuff, and that there are still a lot of holes in things like IE. But the GPL/linux community seems to be in denial about the number of exploits for their software, and if anybody brings it up, they will be told that "Oh, but we fix it in a couple of days and it takes Micro$oft months!!!". I leave it as an exercise for the reader to code a portscanning tool integrated with a couple of well-known linux root exploits, and count the number of compromised boxes.

And this is obviously only the tip of the iceberg. There are probably at least a couple of exploits that haven't left the black-hat community yet.
Posted on 2004-05-03 03:25:35 by f0dder
The thing with MS' patches is that MS sells their products, and therefore has a responsibility to their customers. MS cannot hack a patch together and distribute it without thoroughly testing it. It may cause more trouble than it fixes.
With linux, there is no such responsibility, and it has happened more than once that a quickly released patch didn't actually work, or fixed one bug while introducing another, so that a follow-up patch was required (sometimes even more than once).
Is that what we want?
In order to be safe, you'd have to wait until the community tested the patches anyway.

Anyway, the joke is on the linux community. Since they were stressing the security aspect so strongly, Microsoft decided to attack the problem with everything they have, and Windows' security problems will soon be history. And then what will the linux community stress? Lack of applications? Lack of drivers? Lack of user-friendliness?
Posted on 2004-05-03 03:57:30 by Scali
As for the linux kernel... It's about as far behind technically as KDE and Gnome are. It just isn't that obvious. But Tanenbaum pointed it out from the beginning. The design is monolithic, outdated and inflexible. The Windows kernel is far better. It's modular, it supports multiple subsystems simultaneously, it can offer near-realtime performance as easily as round-robin services, etc, etc. In fact, the FreeBSD kernel is better than linux as well, but it gets less attention than linux for some reason.


I would be interested to know more about that...

Do you have any links that compare the kernels and their performance?

(btw maybe I'm wrong, but I don't really believe the kernel of an OS and its multitasking architecture make much difference to the speed of an app / the "betterness" of an OS... since the task switch will never take more than a tiny fraction of the processing power... but maybe the way input/output, messages and inter-process communication are handled ARE important... and of course there is the speed/efficiency of the APIs, but that's not the kernel...)
Posted on 2004-05-03 12:08:08 by HeLLoWorld
Do you have any links that compare the kernels and their performance?


What I mean is not the 'raw performance', but rather the 'performance characteristics'.
In Windows, you can give an application time-critical priority, and then the application can get all CPU time, and will have to release the CPU manually by calling Sleep().
This is not possible in linux afaik, yet it is very important in certain kinds of applications (for example an mp3 player that never skips. I had problems with that back in the Pentium age, when I tried it on FreeBSD, while it worked flawlessly in NT4).
The opposite is not possible either, afaik. An application with idle priority, which is only run when no other tasks are running.
I use UD, which is a distributed client that tries to find a cure for cancer (much like seti@home, only with a useful goal).
In Windows it runs with idle priority, which means that it can utilize all my spare CPU power, without bothering the performance of my PC one bit (it runs when normally the idle thread would be running). I can't tell the difference if it's running or not.
Another thing that is not in linux, afaik, is the dynamic scheduling in Windows, which acts on user input, network events etc. When such an event occurs, Windows will temporarily boost the priority of the process that the event is sent to, which means it will handle the event immediately, making the system more responsive.
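
Roughly what I mean, in Win32 terms (a minimal sketch, not taken from any real app; whether you actually want idle or time-critical priority obviously depends entirely on what the program does):

    #include <windows.h>

    int main()
    {
        // Case 1: the UD / seti@home style background cruncher.
        // Idle priority class: the process only gets CPU time that would
        // otherwise go to the idle thread, so the rest of the system
        // barely notices it is there.
        SetPriorityClass(GetCurrentProcess(), IDLE_PRIORITY_CLASS);

        // Case 2 (the opposite extreme, not combined with case 1 in practice):
        // something like an audio-player thread that must never be starved.
        // SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);
        // SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

        for (int i = 0; i < 10; ++i)
        {
            // ... do a slice of work here ...

            // At the high end of the priority range you MUST give the CPU
            // back yourself (Sleep or a wait), or you starve the whole machine.
            Sleep(100);
        }
        return 0;
    }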

So in short, the scheduler is really important for the performance of a system. Not because of the time it takes to switch tasks, but because of when it switches tasks, and how. Like I said, I couldn't play back mp3s on FreeBSD without skipping. Not because my CPU was not fast enough, but because the scheduler could not schedule tightly enough for the mp3 player to always receive CPU time when it was required. This meant that if I was multitasking a lot, my mp3s would skip, even though I still had spare CPU resources.

That's the problem: linux and FreeBSD are mostly aimed at server duties, but server duties don't have very complex scheduling requirements. Usually it is enough to schedule all services round-robin. An interactive multitasking desktop environment is a far more complex matter, and as far as I know, linux doesn't quite cut it yet. The GUI feels unresponsive, and I doubt that I could run something like seti@home without noticing it in the performance of my PC.

Other than that you are right though... If there is only a single app running (well, actually only one thread), then it shouldn't matter on which OS it runs, because the hardware determines the performance. But when you are running multiple threads or applications, there can be a difference in the way the kernel allows you to run them. On the whole, you may get the same amount of work done in the same amount of time, but often it is more important when which part of the work is done, like e.g. the mp3 that should not skip, or the application that should handle an event as soon as it comes in.
Posted on 2004-05-03 13:06:00 by Scali
thank you very much.

btw a teacher of mine once said that at one time the win kernel was faster than the linux one, and that after further investigation it turned out this was because the win kernel did all task switches round-robin every time without caring, while the linux kernel did complex priority calculations to determine who's next, and this was eating some significant processing power, so they finally reverted to simple round-robin.

Have you heard of anything like that? (That would imply the win kernel has changed a lot since then, and that the linux kernel once did complex scheduling and no longer does...)

(But this teacher also once said that if win was unstable over time, it was because it didn't free some memory used by apps when they terminate, while linux carefully does, and he said it without any proof or example, so maybe he was talking about something he vaguely heard and didn't know exactly... to say the least :) )
Posted on 2004-05-03 13:23:35 by HeLLoWorld
Btw, is this info on the win kernel/scheduler available and documented? In MSDN for example? Or is it something you've learnt from unofficial docs or professional experience? Things like that may change from one Windows version to another, don't they?
Posted on 2004-05-03 13:25:37 by HeLLoWorld
btw a teacher of mine once said that at one time the win kernel was faster than the linux one, and that after further investigation it turned out this was because the win kernel did all task switches round-robin every time without caring, while the linux kernel did complex priority calculations to determine who's next, and this was eating some significant processing power, so they finally reverted to simple round-robin.


Unless he's talking about a really old version of Windows (pre-NT4/2k/XP), that's complete rubbish. Windows has a very advanced scheduler (you can even select whether to favour server- or desktop-style scheduling). If anything, it would be the other way around (I wouldn't be surprised if early linux kernels did vanilla round-robin scheduling).

The details of Windows' task scheduling (among other things) were explained by a spokesman from MS at the presentation of Windows XP at my university. You can also find this information in MSDN, and probably in various books as well ("Inside Windows 2000", probably). So it's well-documented, I suppose. The linux scheduler isn't documented very well, but you can just check its source code to see what it can and cannot do.
Posted on 2004-05-03 13:39:45 by Scali
Scali's article is very good. But I have two considerations to point out:

1. We have successful examples of open source right under our nose (one example among others):
RadAsm with its addins and custom controls, which (imho) beats all commercial counterparts in its field. And hopefully it will continue doing so in the future. Yes, I agree, it is mainly the result of the great effort of a single individual, but the code is there. Everyone can learn and contribute.

2. If you have a commercial software company and develop a good/innovative product that collides with MS interests, you risk being put out of business, because your software has to go through Windows. The alternatives are: you sell your product and MS makes the big money from it, or you enter a partnership with MS (as Autodesk or Intergraph did) and share your earnings with them while remaining the little partner.
The GPL has permitted many new companies to grow and survive under the current conditions. You cannot neglect the money made by (and not only by) RedHat, SuSE, Mandrake, ...
Posted on 2004-05-03 14:22:29 by pelaillo
Is RadAsm GPL though? The article focuses on GPL, not opensource in general. And does the author of RadAsm support his family with RadAsm? The article doesn't deny that you can write software and release it under GPL. It points out that it is hard to make money if you do so.

And RedHat, SuSe, Mandrake etc make money from support, mostly. None of them are very healthy by the way. Mandrake nearly went bankrupt not too long ago, and SuSe was bought by Novell recently, if I'm not mistaken.
And still, as the article points out, this is an exception where GPL could work. It doesn't go for most other software.

Also, your paranoia against MS seems unfounded.
Posted on 2004-05-03 14:35:19 by Scali
The linux UI un-responsiveness isn't just because of the scheduler, I did a gentoo install optimized for the system etc yadda yadda with "multimedia" scheduler patches or whatever, and it still felt laggy/spongy. I suppose it's because the XFree86 implementation of the X11 protocol sucks a bit, and that the window managers (well, at least KDE) are suckily coded. Put more than 4 controls in a window, and resizing is no longer a joy. Sure, this could probably be fixed by using one of the minimalistic window managers... but then why would I want to use one of those?

On the positive side, the 2D hardware acceleration actually worked okay, even though it was an onboard intel 845G graphics chipset. I guess the 2D performance was at about Windows level, at least for simple stuff like moving a window around. That impressed me a lot; previously, generic VESA2 DOS code beat the voodoo3 "optimized" XF86 drivers easily ;)

Didn't change the fact that a gentoo tuned for the P4 1.7ghz system it ran on performed worse than a generic WinXP on a 700mhz athlon. Graphics performance, load time of applications (ugh!), responsiveness - of course not to mention user-friendliness. I think gentoo had a total boot time that was some 10 seconds shorter than XP's. Then again, that's probably to be expected with a 1ghz faster CPU ;)
Posted on 2004-05-03 15:06:44 by f0dder
Is RadAsm GPL though? The article focuses on GPL, not opensource in general.


The thread is focused on Open Source (see the title)

I want to point out that open source is not the same as GPL, just so we are talking about the same thing.

Mandrake is out of bankruptcy and that is a good sign. However, I am not defending the GPL, only balancing things. There are people who have their jobs thanks to the GPL.

It is not paranoia, it is recent history. The rules of the market.
Do you remember WordPerfect, Lotus 123, AutoCad 12 for DOS?
Do you remember their first versions for Windows?
Is all the responsibility for their fall to be placed on the inability of those big software companies of the time to innovate? Why does only AutoCad remain (even though it is not the best CAD, if not the worst from the user's point of view)?
Posted on 2004-05-03 15:16:16 by pelaillo

and SuSe was bought by Novell recently, if I'm not mistaken.


Do you remember the price? :grin:
Posted on 2004-05-03 15:28:56 by pelaillo
The thread is focused on Open Source (see the title)


Yes, but since you mentioned the article, and RadASM directly after it, I thought you were referring to the article, not the thread itself.

It is not paranoia, it is recent history. The rules of the market.
Do you remember WordPerfect, Lotus 123, AutoCad 12 for DOS?
Do you remember their first versions for Windows?
Is all the responsibility for their fall to be placed on the inability of those big software companies of the time to innovate? Why does only AutoCad remain (even though it is not the best CAD, if not the worst from the user's point of view)?


I have no idea what you're trying to say.
Posted on 2004-05-03 16:24:51 by Scali
The linux UI un-responsiveness isn't just because of the scheduler, I did a gentoo install optimized for the system etc yadda yadda with "multimedia" scheduler patches or whatever, and it still felt laggy/spongy.


Could be because the scheduler doesn't actually cooperate with the X11/KDE subsystems, like Windows does? From what I understood, the new linux 2.6 scheduler uses heuristics rather than actual interaction with the GUI.
Posted on 2004-05-03 16:26:36 by Scali