Yet another "innovation" from M$ :grin: Have they heard that Gecko has been doing that very efficiently for several years?

Could this be what is called UML? (but as some odd (partial) implementation/rip-off by M$)

As for the wrapper code -- it will likely be buggy, and -- s - l - o - w -- (a wrapper around object calls means a lot of overhead, too much)
Posted on 2003-11-13 10:37:20 by scientica
i don't know what to think about this... i mean i learned it the hard
way that you simply CAN NOT code full applications in assembler in a
team-oriented environment. i also learned that oop can be a great
tool no matter what.

api's are okay but i wouldn't set up an os this way either. at this time
i work on a HUGE programming interface for cad systems and after two
years of pain in the ass i finally switched to oop - everything's so
wonderfully clean and structured now. using my interface works like
a charm - it's beautiful. operating systems are perfect application areas
for oop, believe it or not.

so why switch to linux - because it's harder to call api's then? this
is ridiculous; code in C++ and do the math in asm. don't get me wrong,
asm is still my favourite programming language but i use it only where
it matters - algorithms, calculations, memory fiddling and stuff like
that.

you just have to learn that it's simply horrid to code in asm 100%,
well, at least in a company where you try to earn money and where
you're bound to time schedules.

i would only switch to linux if longhorn (or however it is called) is done
badly -> buggy, slow whatever.
Posted on 2003-11-14 10:49:48 by mob
I have nothing against OOP. You are absolutely right with respect to big working groups, scheduling and maintenance.
Moreover, OOP is a method, not a language.
And IMHO, the main reason a group of programmers in the "real world" cannot do big assembly projects is that there are not many people out there able to do it. But inside this community, people are starting to join efforts...

Longhorn does not introduce OOP to the world of user interfaces; as usual, they just blueprinted the idea. Current Open Source interfaces and applications are more object oriented in many respects, and have been for a long time. Big groups are working together without ever meeting each other.

And if M$ follows their tendency of hiding the details because the "user-programmer" is supposedly too stupid to know what is really happening inside the OS, then there is no interest for me in *dotting* together a bunch of megabytes of someone else's buggy code.
Posted on 2003-11-14 11:40:23 by pelaillo

? The ignorance of the asm language, and the intrinsic things that come with each language (if you want to learn a new language) -- not all people know asm the way they know C/C++, Java, etc...


I need to be more explicit here; I don't want to cause misunderstandings.


Here I am referring to other people learning asm -- the others in your group. You see, it is a little difficult to find a programmer who knows asm well, so you normally need to train the people who are capable of solving the problem, but that means showing them a new language.

The new language will show you things you didn't know before; it is another way of saying things, so you need to change, or learn to use, the new language.

I have my own thoughts, but I will quote another person here; let me remember or find the paragraph again.


And here is the quote ;):
"It is always difficult to think and reason in a new language, and this difficulty discouraged all but men of energetic minds."

Charles Babbage.




Nice day or night.
Posted on 2003-11-14 11:55:04 by rea
I have a question here... who deleted my post where I quoted this, or how was it deleted, and if it was deleted, why?? I certainly did not delete my own idea... :(

? The ignorance of the asm language, and the intrinsic things that come with each language (if you want to learn a new language) -- not all people know asm the way they know C/C++, Java, etc...



?? any idea????


Nice day or night.....
Posted on 2003-11-15 10:33:35 by rea
I'm not sure what to think of the direction windows is going, but I don't think I'm too fond of it. Cleaning up the API is nice, but .NETing everything? I'm not too fond of the idea, although I guess it (if done correctly) could help increase security a lot - avoiding buffer overflows and such. Will be interesting to see what happens, and how much overhead it will really have. For a server OS, I'd say some amount of overhead is quite acceptable if the security+stability increases.


As for the wrapper code -- it will likely be buggy, and -- s - l - o - w -- (a wrapper around object calls means a lot of overhead, too much)

Why would it be buggy? The code to wrap will probably be rather simple. And as for "much overhead" - hm. Perhaps this would be noticeable if you call silly functions like SetPixel, but for things like WriteFile etc? You shouldn't be upgrading to a new OS (or CPU for that matter) to run legacy applications anyway ^_^
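Something like this hypothetical sketch is all a thin compatibility wrapper would need to be (NewFxWriteFile is an invented name - nobody outside MS knows what the real entry points will look like):

#include <windows.h>

/* hypothetical new-API entry point - invented name, unknown real signature */
extern BOOL WINAPI NewFxWriteFile(HANDLE h, LPCVOID buf, DWORD n,
                                  LPDWORD written, LPOVERLAPPED ov);

/* the legacy wrapper has no logic of its own: argument pass-through
   plus one extra call/ret pair of overhead */
BOOL WINAPI WriteFile_compat(HANDLE h, LPCVOID buf, DWORD n,
                             LPDWORD written, LPOVERLAPPED ov)
{
    return NewFxWriteFile(h, buf, n, written, ov);
}

Not much room for bugs in code like that.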

mob is making a lot of sense. I dunno if the lot of you have realized it or not, but windows and the win32 API are hugely object-oriented already - even if you're not using an OOP language to program it.
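Just think of what a window handle really is - a minimal C sketch of the idea (standard Win32 calls, nothing invented):

#include <windows.h>

/* An HWND is effectively an object reference: the state lives behind the
   handle, and SendMessage() plays the role of a method call. */
void set_and_measure(HWND hEdit)
{
    SendMessage(hEdit, WM_SETTEXT, 0, (LPARAM)TEXT("hello"));  /* edit.SetText("hello")  */
    LRESULT len = SendMessage(hEdit, WM_GETTEXTLENGTH, 0, 0);  /* edit.GetTextLength()   */
    (void)len;
}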
Posted on 2003-11-15 11:56:48 by f0dder
Why? Guess twice: windows is buggy now and always has been, and I don't think they've learned (is Win eXP less buggy than 2k?). (I'm not just speaking of the wrapper code but of the new API function code too -- which I think/believe will be a combination of old code, some new code, and R&R (ripped 'n' raped) open source code.)
Still, overhead means a slowdown (just like the "importance" of saving those extra two clocks in a strcpy function - but now speaking of (potentially) much more - calls and jumps - lots of time for a task switch), but as computers get faster I guess everybody's free to add more overbloat, since it'll be hidden (behind closed source) and no one will know (or they will forget) that the system would run faster without the extra overhead.
Here is a little thing about COM overhead, it's actually about PHP > ASP, but the first part is 'bout COM vs non-COM SQL http://php.weblogs.com/php_asp_7_reasons

Buffer overflows - isn't that an error >in< the API, not in the method of calling it?

And what is the result of this object orienting?

Completely leaving the windows platform is becoming more a matter of actual actions and reality than mere attempts for me (basically it's "a few" apps that I haven't been able to find a better linux equivalent for) -- if, or when, projects like wine get less buggy than the existing windows...
Posted on 2003-11-15 14:44:21 by scientica

I'm not sure what to think of the direction windows is going, but I don't think I'm too fond of it. Cleaning up the API is nice, but .NETing everything? I'm not too fond of the idea, although I guess it (if done correctly) could help increase security a lot - avoiding buffer overflows and such. Will be interesting to see what happens, and how much overhead it will really have. For a server OS, I'd say some amount of overhead is quite acceptable if the security+stability increases.


Why would it be buggy? The code to wrap will probably be rather simple. And as for "much overhead" - hm. Perhaps this would be noticeable if you call silly functions like SetPixel, but for things like WriteFile etc? You shouldn't be upgrading to a new OS (or CPU for that matter) to run legacy applications anyway ^_^

mob is making a lot of sense. I dunno if the lot of you have realized it or not, but windows and the win32 API are hugely object-oriented already - even if you're not using an OOP language to program it.


What do you mean by .NETing everything?
Posted on 2003-11-15 19:37:47 by x86asm



What do you mean by .NETing everything?


I have always imagined it this way from what I have read. A type of .NET engine will be loaded, and the OS will be a kind of compiled XUML script running in that engine. All APIs will just be wrapper functions that call pre-made script components. The lowest-level language in an OS of this type would be a script written in XUML :rolleyes:

I don't agree that under the method I have mentioned it will be more buggy - maybe less, as there will be a central engine that sets truly global standards. I am sure however that it will be very slow, and like Windows it will need a processor that has not been invented yet and we will have to wait for it. Remember Windows NT4 and 95 were written in the days when the 486 was king but were painful to watch on those processors.
Posted on 2003-11-15 20:20:55 by donkey



I have always imagined it this way from what I have read. A type of .NET engine will be loaded, and the OS will be a kind of compiled XUML script running in that engine. All APIs will just be wrapper functions that call pre-made script components. The lowest-level language in an OS of this type would be a script written in XUML :rolleyes:

I don't agree that under the method I have mentioned it will be more buggy - maybe less, as there will be a central engine that sets truly global standards. I am sure however that it will be very slow, and like Windows it will need a processor that has not been invented yet and we will have to wait for it. Remember Windows NT4 and 95 were written in the days when the 486 was king but were painful to watch on those processors.


yikes, that's scary!
Posted on 2003-11-15 21:53:42 by x86asm

I am sure however that it will be very slow, and like Windows it will need a processor that has not been invented yet and we will have to wait for it. Remember Windows NT4 and 95 were written in the days when the 486 was king but were painful to watch on those processors.
I was told the spec of the machine that Longhorn is aimed at: 2GB RAM (probably on a really fast FSB) and a 4GHz processor with HT, plus, as previously mentioned, you will need a good graphics card. And of course these systems will be 64 bit. These stats were told to me by an MS representative, but you have to remember that they are publicly available, and are indicative only. So I wouldn't bother trying to run Longhorn on your standard 2-2.5GHz system with 512MB and a 128MB GeForce card :)
Posted on 2003-11-15 22:05:41 by sluggy

I was told the spec of the machine that Longhorn is aimed at: 2GB RAM (probably on a really fast FSB) and a 4GHz processor with HT, plus, as previously mentioned, you will need a good graphics card. And of course these systems will be 64 bit. These stats were told to me by an MS representative, but you have to remember that they are publicly available, and are indicative only. So I wouldn't bother trying to run Longhorn on your standard 2-2.5GHz system with 512MB and a 128MB GeForce card :)


I had assumed much worse, though that is bad enough. At some point people are just going to say my PC does everything it needs to do and send the trash back to Microsoft. Watching the blabvertisements on television for .NET I cannot see much that would warrant buying that kind of hardware. Wow, I can integrate my web page into my database, that's worth 10 grand, get out the cheque book Mabel and don't worry about that diet, we're not buying groceries this year :grin:
Posted on 2003-11-15 22:37:37 by donkey
I moved to Linux... I am running Mandrake 9.1 and I tried 9.2 -- don't even bother with 9.2. I'm going to end up moving to RedHat now, since Mandrake for one is getting worse with each release, and it is made too user-friendly and does not attract programmers, so you can't always find an RPM that will work with it... For example, KDE 3.1.4 takes like 12 different sets of source code (one of them is 130MB); compiling all of that would take forever, and there is no Mandrake RPM but there is a RedHat version... Mandrake can normally run RedHat stuff, but I don't trust it to with something like KDE..

there are a few libs out there

QT -> GUI
TK -> there are KDE and Perl versions with the same name
GTK+
GTK+ 2.0
Gnome wrapper to some lib..

QT has an OpenGL version, but it costs a lot of money.

Mesa, which is sort of an OpenGL replacement -- they can't claim it is one without getting in trouble..

There are also other graphics libs..

FASM may support Linux asm, but projects like MASM and RadASM are simply not present on Linux... there is nothing that even compares to them that I have found...

here is a good list of software you will want to look into

GAIM or Kopete -> Kopete is a lot like Trillian
Evolution is a nice Outlook replacement..
Wine
WineX -> used for games
CrossOver -> a Wine-type program that lets you install windows programs like Windows Media Player 6.2 and use their plugins in web browsers.. it also supports other windows apps that are not plugin related..

Opera -> nice browser that works on lots of OSes...

MPlayer is an awesome video player for Linux; it builds with both a command-line and a graphical frontend.. to start the graphical one, just run gmplayer..

I'm sure I'm forgetting something, but if you have any questions just PM me
Posted on 2003-11-16 02:38:47 by devilsclaw

(just like the "importance" of saving those extra two clocks in a strcpy function - but now speaking of (potentially) much more - calls and jumps - lots of time for a task switch)

Thing is, you shouldn't be using API calls for things like strcpy and other time-critical OS-independent routines, just like you shouldn't write a PutSprite function doing PutPixel calls... when you're calling OS routines (the ones you really need because you don't have driver access etc), you're usually calling large things like WriteFile, socket functions, GDI manipulation routines. Things that either aren't time-critical, end up hardware accelerated, or are I/O bound anyway. Thus, the slight overhead shouldn't matter.
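To illustrate the point (PutPixel and BlitSprite are made-up stand-ins, not real API names):

/* invented illustrative routines - not real API functions */
extern void PutPixel(int x, int y, unsigned char color);
extern void BlitSprite(int x, int y, const unsigned char *img, int w, int h);

/* w*h API calls: the call overhead dominates the actual work */
void put_sprite_slow(int x, int y, const unsigned char *img, int w, int h)
{
    int i, j;
    for (j = 0; j < h; j++)
        for (i = 0; i < w; i++)
            PutPixel(x + i, y + j, img[j * w + i]);
}

/* one call: the per-call overhead disappears into the bulk work */
void put_sprite_fast(int x, int y, const unsigned char *img, int w, int h)
{
    BlitSprite(x, y, img, w, h);
}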


Buffer overflows - isn't that an error >in< the API, not in the method of calling it?

Both, but more often in the calling code - like supplying a too-small buffer, etc. While I'm still not too fond of .net (because of my assembly roots, and thus emotional rather than objective opinions), there are a lot of built-in facilities for security and buffer overflow elimination. It's not the holy grail, just like PaX isn't, but it helps. Dunno if they're going to enable these checks everywhere, as they do slow stuff down a bit - but everyone's bitching that they want a secure OS, right?


Remember Windows NT4 and 95 were written in the days when the 486 was king but were painful to watch on those processors.

Win95 ran just fine on my 486dx4-100 with 8 megs of ram - 9x was ugly though. It should never have existed, and MS should have concentrated their efforts purely on NT.

Linux needs to become less messy.
Posted on 2003-11-16 07:22:46 by f0dder

Both, but more often in the calling code - like supplying a too-small buffer, etc. While I'm still not too fond of .net (because of my assembly roots, and thus emotional rather than objective opinions), there are a lot of built-in facilities for security and buffer overflow elimination. It's not the holy grail, just like PaX isn't, but it helps. Dunno if they're going to enable these checks everywhere, as they do slow stuff down a bit - but everyone's bitching that they want a secure OS, right?

Well, there should be protection for that, like exception handling, exceptions, page faults, etc., which take care of the situation when the buffer overflows (like printing "oops..." to the terminal -- I mean, put it in a message box and kill the app).

Well, linux is a little messy at the start, but after a while it's clear that it's easier to use than windows; for instance you can (at least more often than in Windows) find out (exactly) what causes crashes/errors, and why - something which is sometimes impossible in windows.

btw, what's PaX? (haven't heard of it)
Posted on 2003-11-16 07:57:28 by scientica

My point was to say that a little overhead here and some there adds up and bites you in the back one day; maybe my example with strlib funcs isn't the best.

And indeed it can - but some overhead is acceptable if it can eliminate a large number of exploits. We'll have to wait and see about that though; I'm not too optimistic. A major problem is still the user applications, which are often written with static buffer sizes and oldschool string handling (ie "we'll do everything ourselves because we don't trust high(er)-level code") - which results in buffer overflows, format string vulnerabilities and whatnot.
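The classic pattern looks something like this (a deliberately broken sketch, not taken from any real program):

#include <stdio.h>
#include <string.h>

/* fixed-size buffer + unchecked copy: the textbook overflow */
void greet(const char *name)
{
    char buf[32];
    strcpy(buf, name);   /* anything longer than 31 chars smashes the stack */
    printf(buf);         /* and passing user data as the format string is a second hole */
}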


Well, there should be protection for that, like exception handling, exceptions, page faults, etc., which take care of the situation when the buffer overflows (like printing "oops..." to the terminal -- I mean, put it in a message box and kill the app).

Exception handling isn't sufficient - it requires that an exception actually takes place... the idea of exploits is to redirect program execution to code you inject - WITHOUT causing an exception. To avoid this, you need "secure code" - which isn't ever going to happen no matter how careful you are, and even if you could convince low-level (here pure C is 'low-level' too) zealots to apply more sensible programming methods. You need parameter validation, stack checks, etc.

Basically, even "this parameter can never be wrong" must be checked too - a rather humorous example is hostile code execution on the x-box by messing with font files. Who'd ever have thought THAT would be exploited? ;-)
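A minimal sketch of what "checking everything" means in practice - validate before you trust, even the parameters that "can never be wrong" (same hypothetical routine as the overflow example above, just with the check added):

#include <stdio.h>
#include <string.h>

void greet_checked(const char *name)
{
    char buf[32];
    if (name == NULL || strlen(name) >= sizeof(buf))
        return;                 /* reject instead of trusting the caller */
    strcpy(buf, name);          /* now provably fits */
    printf("%s", buf);          /* fixed format string */
}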


Well, linux is a little messy at the start, but after a while it's clear that it's easier to use than windows; for instance you can (at least more often than in Windows) find out (exactly) what causes crashes/errors, and why - something which is sometimes impossible in windows.

Linux is very messy. "DLL hell" is even worse there than on windows (not that I ever had DLL trouble on windows). There's no proper distinction between "the system libraries" and libc. Lack of standards between distros, etc. There's no proper graphics architecture (dunno if DRI or whatever they're calling it now will change this), and if you want hardware acceleration, you're sort of limited to nvidia+opengl. The user/group scheme is very inflexible, and proper ACLs are only just starting to surface in the *u*x world. I would like an alternative to windows, and it would be fine if it was open source... I find linux too anarchistic and zealous though.


btw, what's PaX? (haven't heard of it)

A patch to do something that's theoretically impossible on x86 - making the stack non-executable. As you may or may not know, the x86 paging mechanism has various flags (read/write/supervisor etc), but you cannot control the executable property per page. PaX does this, through some pretty hairy low-level knowledge. It was written by some of the really old cracking/RE people, and later ripped off by Teo-whats-his-name for OpenBSD (and of course the guy denies that he ripped off PaX - so much for opensource honesty ;-)).
Posted on 2003-11-16 08:20:21 by f0dder


Thing is, you shouldn't be using API calls for things like strcpy and other time-critical OS-independent routines, just like you shouldn't write a PutSprite function doing PutPixel calls... when you're calling OS routines (the ones you really need because you don't have driver access etc), you're usually calling large things like WriteFile, socket functions, GDI manipulation routines. Things that either aren't time-critical, end up hardware accelerated, or are I/O bound anyway. Thus, the slight overhead shouldn't matter.


Both, but more often in the calling code - like supplying a too-small buffer, etc. While I'm still not too fond of .net (because of my assembly roots, and thus emotional rather than objective opinions), there are a lot of built-in facilities for security and buffer overflow elimination. It's not the holy grail, just like PaX isn't, but it helps. Dunno if they're going to enable these checks everywhere, as they do slow stuff down a bit - but everyone's bitching that they want a secure OS, right?


Win95 ran just fine on my 486dx4-100 with 8 megs of ram - 9x was ugly though. It should never have existed, and MS should have concentrated their efforts purely on NT.

Linux needs to become less messy.


ya f0dder, same thing I was thinking. I'm not bothered too much by this because my code usually doesn't rely on API functions for most of its running time; the only problem I would have with this is if the OpenGL and DirectX components go through this WinFX interface.
Posted on 2003-11-16 16:59:49 by x86asm
DirectX is already COM based, so you wouldn't have extra indirection - though there might be a new version, so it might be different, but with about the same overhead, I guess. OpenGL is another API, so there's not much they can do there - OGL still needs to talk to the hardware through abstraction layers, though.

And again, it shouldn't matter much - it might if you have built your 3D engine around per-vertex output calls, but that's going to be horribly inefficient anyway. For something like vertex+index buffers, a little overhead won't matter.
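Roughly the difference between these two (plain OpenGL 1.1 calls, just to show the shape of it):

#include <GL/gl.h>

/* one API call per vertex - call overhead paid thousands of times per frame */
void draw_immediate(const float *verts, const unsigned short *idx, int ntris)
{
    int i;
    glBegin(GL_TRIANGLES);
    for (i = 0; i < ntris * 3; i++)
        glVertex3fv(&verts[idx[i] * 3]);
    glEnd();
}

/* the whole mesh handed over in one call - any per-call overhead is negligible */
void draw_buffered(const float *verts, const unsigned short *idx, int ntris)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glDrawElements(GL_TRIANGLES, ntris * 3, GL_UNSIGNED_SHORT, idx);
    glDisableClientState(GL_VERTEX_ARRAY);
}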

But again, all this is guesswork and speculation; I haven't grabbed a version of longhorn to test myself (and why would I, assuming I could get hold of one?). Whatever is done now probably won't reflect the final performance very well anyway.

I think it's interesting to use hardware acceleration for the GUI (much more than is being done now) - I do however fear it will be used to add even more ludicrous eye candy, perhaps to the level of some of the more funky linux WMs, instead of just moving the UI task to the GPU.

Oh, and is it true that they plan on running the entire system as .net? Would be an interesting idea... while I don't necessarily like .NET, this could mean we could finally break free from the clutches of x86...

Oh well, we'll see what it all comes down to. I'm staying with win2k for a while, it serves my purposes quite fine, is rather stable, and fast too.
Posted on 2003-11-16 17:17:48 by f0dder
f0dder, you are wrong about the graphics on Linux.. nvidia uses GLX and ATI uses DRI... I think the recent release of the ATI drivers moved to GLX...

There is also a standard called video4linux, which is used for video capture..

There are two main standards for Linux GUIs: GTK and QT.

Then there are the system libs... and libc.

The standard for OpenGL on Linux is Mesa; DRI and GLX integrate with Mesa and XFree86 to allow hardware acceleration.. so you then only need to use Mesa's libs to use OpenGL..
Posted on 2003-11-16 19:13:58 by devilsclaw