(following up from this thread)


One more thing to note is that "WaitForSingleObject" blocks the calling thread until the event object's state changes to "signalled". Instead of that, I think, depending on your needs, it is better to create an event object using the "CreateEvent" Win32 API, then create a thread using the "CreateThread" Win32 API and check the state of that event with "WaitForSingleObject" using really small values for its time-out. That way you will have a running program while the other process is running.
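A rough sketch of what I mean (untested; error handling omitted, and the worker just Sleeps as a stand-in for real work):

    #include <windows.h>
    #include <stdio.h>

    /* hypothetical worker: stands in for whatever the other thread does */
    static DWORD WINAPI WorkerProc(LPVOID param)
    {
        HANDLE hEvent = (HANDLE)param;
        Sleep(2000);            /* pretend to do some work */
        SetEvent(hEvent);       /* flip the event to "signalled" */
        return 0;
    }

    int main(void)
    {
        /* manual-reset event, initially non-signalled */
        HANDLE hEvent  = CreateEvent(NULL, TRUE, FALSE, NULL);
        HANDLE hThread = CreateThread(NULL, 0, WorkerProc, hEvent, 0, NULL);

        /* really small time-out, so the main thread keeps running */
        while (WaitForSingleObject(hEvent, 10) == WAIT_TIMEOUT)
            printf("still running...\n");

        printf("worker is done\n");
        CloseHandle(hThread);
        CloseHandle(hEvent);
        return 0;
    }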


That would be silly, XCHG, and almost as bad as the typical hutchhacky code that polls GetExitCodeProcess...

Either you WaitForSingleObject on pi.hProcess and block until the process is done (works great, no CPU usage, etc.) or you use MsgWaitForMultipleObjects if you don't want to block your message loop (I think edgar/donkey has an example of that somewhere).
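Something along these lines (a quick sketch, not compiled; assumes 'pi' came from a successful CreateProcess call):

    #include <windows.h>

    /* pump the message loop while waiting for the child process to exit */
    void WaitWithMessageLoop(PROCESS_INFORMATION *pi)
    {
        MSG   msg;
        DWORD r;
        for (;;)
        {
            r = MsgWaitForMultipleObjects(1, &pi->hProcess, FALSE,
                                          INFINITE, QS_ALLINPUT);
            if (r == WAIT_OBJECT_0)
                break;                  /* the process has terminated */
            /* otherwise window messages arrived - dispatch them */
            while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
            {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
        }
    }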
Posted on 2007-09-20 08:09:18 by f0dder

That would be silly, XCHG


Why exactly would that be silly? How do you think Windows handles events? It creates a separate thread, not accessible/visible to the programmer, to watch the state of a certain event handle.
Posted on 2007-09-20 14:34:48 by XCHG


That would be silly, XCHG


Why exactly would that be silly? How do you think Windows handles events? It creates a separate thread, not accessible/visible to the programmer, to watch the state of a certain event handle.

Have a look at "Windows Internals" (formerly known as "Inside Windows 2000"), and/or any decent book on OS/kernel internals... trust me, it doesn't do it the way you're saying :)

The short version: when you Wait on an object, the thread is removed from the ready-list, and thus takes no CPU time; it isn't even considered for scheduling. The thread is also added to a "waiter list" for the object it's Waiting on. Whenever the object is "ready" or "triggered", the threads on the waiter-list are awakened again (unless of course they're waiting for multiple objects with waitall=true).
Posted on 2007-09-20 16:32:01 by f0dder



That would be silly, XCHG


Why exactly would that be silly? How do you think Windows handles events? It creates a separate thread, not accessible/visible to the programmer, to watch the state of a certain event handle.

Have a look at "Windows Internals" (formerly known as "Inside Windows 2000"), and/or any decent book on OS/kernel internals... trust me, it doesn't do it the way you're saying :)

The short version: when you Wait on an object, the thread is removed from the ready-list, and thus takes no CPU time; it isn't even considered for scheduling. The thread is also added to a "waiter list" for the object it's Waiting on. Whenever the object is "ready" or "triggered", the threads on the waiter-list are awakened again (unless of course they're waiting for multiple objects with waitall=true).



So quoting you, Windows does NOT create a thread to watch over the state of an event object. What is important now is: how does Windows realize whether the state of an event object is "signalled" or not?

About the "short version": I don't think I was even discussing the program's calling thread! I was talking about the threads created by the Windows OS to watch on the state of event objects.
Posted on 2007-09-21 02:51:20 by XCHG

how does Windows realize whether the state of an event object is "signalled" or not?

Simple: once the event goes into the "signalled" state, the threads on the object's waiter-list are awakened. There's no "continuous monitoring" going on; the code responsible for signalling an event also wakes up the listeners.

Since different object types are signalled due to different events, there are a lot of places in the kernel responsible for this code...

Anyways, "Inside Windows 2000" chapter 3, the synchronization part, describes this all very well (and a lot better than I can ;)). Bottom line is, no, there's no hidden threads or other voodoo.

A very basic example would be calling SetEvent - in this case, it's basically the kernel mode version of SetEvent that's responsible for waking up the waiting threads.
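In grossly simplified pseudo-C, the idea is roughly this (all names here are made up, this is NOT the actual kernel code):

    /* made-up structures, only to show the mechanism */
    struct thread {
        struct thread *next_waiter;
        /* ...context, state, etc... */
    };

    struct event {
        int signalled;
        struct thread *waiter_list;    /* threads blocked on this event */
    };

    extern void move_to_ready_list(struct thread *t);

    void ke_set_event(struct event *e) /* kernel-mode side of SetEvent */
    {
        struct thread *t;
        e->signalled = 1;
        /* no watcher thread anywhere: the signaller itself wakes the waiters */
        while ((t = e->waiter_list) != 0)
        {
            e->waiter_list = t->next_waiter;
            move_to_ready_list(t);     /* thread is schedulable again */
        }
    }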
Posted on 2007-09-21 04:46:23 by f0dder
With the "SetEvent" Win32 API, the programmer is setting the state of an event object to "signalled" and that will obviously give the Windows OS an opportunity to see the list of waiting threads for the thread that is the owner of a specific event object.

However, with "WaitForSingleObject", you are NOT doing anything and Windows has to wait until the state of an event object is set to "signalled" in the given time-out. So how do you justify this whole "waiting"? According to your posts, it happens "magically" and Windows, just out of the blue, notices that the state of an event has been set to "signalled". I don't know if it is just me or... but I don't buy that. Windows, either way, should set up a mechanism to check on the state of event objects associated with a thread. Now it will set up another thread to watch over those; it can check those in its IRQ0 handler; it might do it with periodic interrupts or perhaps something else but the bottom line is that it must check on the state of event handles associated with a thread periodically in order to detect the change in their state.

About voodoo and such: everybody knows that Windows is full of these tricks. The GUI is literally hardwired into the OS. What sane OS does that? I am not trying to start a flame war here; so I guess it'd be better for you to lock this thread or shoot me in the head  :P
Posted on 2007-09-21 06:01:28 by XCHG
It's by no means magic; it's simply a decently designed scheduler :)

The scheduler schedules threads. Threads can be in various states, like "running", "ready to run", "waiting". It has the concept of a "ready-list", which (surprisingly enough :P) is a list of threads that are ready to run. If your thread isn't on the ready-list, it's not even considered for scheduling, and thus uses no CPU time.

A thread that is waiting is moved off the ready-list. Waitable objects have a list of threads that are waiting on them, and the thread is of course added to that list. Once a waitable object is triggered (how it's triggered depends on the type of object), this list is processed, and threads are moved back to the ready-list if they aren't waiting on other objects as well.

Of course if a time-out is set for the wait, there will be some periodic checking to see if the time-out has elapsed, initiated by the timer IRQ; no way to avoid that, obviously... but the time spent doing this is minimal compared to a typical polling approach. And waits without timeouts are used often, too (GetMessage, ReadFile, ...).

Now, the beautiful thing is that (apart from timeout checking of course) there isn't any "monitoring of events" going on. Consider waiting on a process or thread - those waitable objects are triggered as part of process and thread termination, respectively.

Of course some events get triggered as the result of an interrupt or other interaction; i.e., a console input event is usually due to mouse or keyboard input. A waitable timer of course depends on a timer IRQ. But this is triggered via interrupts, not by polling (in your words: "check on the state of event handles associated with a thread periodically").

Again, check out Windows Internals, it explains it better than I can... it's hard to explain briefly anyway, because there are so many interlocking aspects: scheduling, synchronization, object management, etc.
Posted on 2007-09-21 06:42:45 by f0dder
Further reading for XCHG, with definitive timing results:
http://www.asmcommunity.net/board/index.php?topic=25803.0
Posted on 2007-09-21 14:15:49 by Ultrano
exact quote :) :

"blabla... how will windows know object changed etc... blabla... must look periodically... blabla...wont know it by divine power etc... blabla. word. "  :)

i think what xchg misses is :

how do you think the object gets its state changed in the first place, mmh?

short answer: it's always windows that changes the state in the first place, so it's really not very difficult for him to call your crappy callbacks :) at this precise moment, without waiting a nanosecond more. makes sense now?

-either it's some other code that triggers the state change: an api call, a routine in the OS that is called subsequently to another action... in this case, i guess windows leaves room at the end of this code (that changes the relevant object's state) to sequentially call all the user-defined code (i.e. your callbacks) of the threads that were concerned about this object changing its state. No different threads needed, all is called synchronously.

-or it's changed by some IRQ: a user event like keyboard or mouse, or the timer... this is irq code that will be called regardless of everything else. Then i guess windows again leaves room at the end of this code to call your code that was waiting for the object, or (more likely?) will set some flags to later change the state of the relevant objects... which falls back to the first case.



i hope it sheds some light.

this is closely related to "the minimal amount of multitasking needed to make an OS like this work", which is only one, i believe. ONE thread for the whole OS, plus the unavoidable hardware IRQs for external events and timer. that's all. The IRQs briefly buffer some bytes and change some flags, and in the background the OS loop, well, loops forever checking these flags and bytes and updates things accordingly. The ONLY need for atomicity is atomicity between the OS and the IRQs. Then if you want you add your first app, that gets a chunk of code executed after the OS loop, then you add another app, etc...
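in made-up C-ish pseudocode (all the helper names are invented), the idea looks like this:

    /* flags/bytes shared between the one OS thread and the IRQ handlers */
    volatile int           key_flag;    /* set by the keyboard IRQ */
    volatile unsigned char key_byte;    /* buffered by the keyboard IRQ */
    volatile unsigned int  timer_ticks; /* bumped by the timer IRQ */

    extern unsigned char read_kbd_port(void);
    extern void handle_key(unsigned char b);
    extern void run_app_chunks(void);   /* each app gets its chunk of the loop */

    void irq_keyboard(void) { key_byte = read_kbd_port(); key_flag = 1; }
    void irq_timer(void)    { timer_ticks++; }

    void os_main_loop(void)             /* the ONE thread of the whole OS */
    {
        for (;;)
        {
            if (key_flag)  /* atomicity only matters between this loop and the IRQs */
            {
                key_flag = 0;
                handle_key(key_byte);
            }
            run_app_chunks();
        }
    }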

of course things are likely more complex than that; the OS loop is itself fragmented in chunks etc... but many threads for the OS alone are theoretically absolutely not needed.

fodder, please correct me if i'm wrong :)

oh, and having the GUI be run as a server on top of everything else is the damn _best_ way to get a sluggish piece of unresponsive crap under your mouse... yeah you, i'm talking about X/linux here :D ... although i have to admit that the huge amount of processing power today seems to finally have swallowed this sluggishness  :)

solar os is non-preemptive, and seems to work :)
There also was an old OS on Acorn processors (i read so) that was non-preemptive.

sigh...

all this confusion with polling and multithreading, for many people i guess, is due to the preponderance of unix in the academic fields, and in the teaching of computer science...

while thinking in terms of multithreading _can_ be nice, doing everything with polling, mutexes, producer/consumer etc... is imho not the right way of doing things; it's overkill and ignorance of the machine... as long as the hardware doesn't work this way.

All these are _abstractions_ offered by the OS, that's all. Oh wait, let's make two threads that send things to each other through a stack/filo... yeah, right, as if you couldn't _call_ a ****ing function! this is also induced by the flat memory model and again the unix concept that threads are isolated, so it's just plain a pain in the... neck to make them communicate simply, so that's why you resort to locking objects and such... and you've got to wait for a whole OS loop just to get your data to the other end.

Then, of course, where i'm screwed is that nowadays we've got real hardware multiprocessing, and all this "nonsense" suddenly comes in very handy, because atomicity problems, concurrency, deadlocks etc. have already been studied :)

oh well , that'll be all for today, kids :) , i think i'll multitask to something else now :D

Posted on 2007-09-25 03:54:43 by HeLLoWorld
oh, and don't try and start me up on fork() :D
Posted on 2007-09-25 04:04:29 by HeLLoWorld
Did unix originally even have threads? Took a while before it was implemented on linux afaik, because "our processes are so lightweight we don't need threads"...
Posted on 2007-09-25 07:42:08 by f0dder
i don't know if unix did have threads... in fact, i never grasped the difference between processes and threads and i use the two words interchangeably (and i'm wrong)... well i guess it's more or less that threads are different parts of a process that run using the same address space so they can share data? sounds cool... so what i said about common variables isn't true... that way you could also call shared code? not bad... well i guess i should've known since i even must have done it in my courses. so what changes when a thread switches? just eip?
Posted on 2007-09-25 08:01:23 by HeLLoWorld
ok, i've read (some) wikipedia. that's it: threads share address space, and as such data seg, code seg, libraries, handles etc...

what changes in a thread switch is eip, regs and stack seg... no wonder it's faster (even if still yuck) than processes, where you've got to rape the data cache and all page tables... duh! and wikipedia says "on NT/OS2 threads are cheap and processes are expensive, while on other systems there's not much difference"... well, i guess that just means on "other systems" the change of page tables is nothing compared to the "base bloat" background noise.
Posted on 2007-09-25 10:00:41 by HeLLoWorld
In OS theory, every active sequential computation is a process. So what are called tasks and threads are also processes, as far as theory goes.

When processes are implemented in a real OS, isolation of resources becomes formalized, and in Windows, terminology is adjusted to distinguish between raw OS processes that can freely share resources (threads), and groups of processes that are isolated somewhat from each other (Windows processes).

Vendor specific terminology is not new. What most of us call files are known on one series of mainframes as a dataset, since the 1960s.
Posted on 2007-09-25 12:38:19 by tenkey

ok, i've read (some) wikipedia. that's it: threads share address space, and as such data seg, code seg, libraries, handles etc...


Yep, the concept of "threads" becomes more clear when you approach OS Development. They are just a name for clearly distinguished routines within a program *space* that have been scheduled for execution.

On a side note, properly coded multi-threaded applications can lead to more efficient parallel processing for systems with more than one CPU/Core.
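For example, a trivial sketch (no error checking; the two threads can simply write into the same arrays because they share the address space):

    #include <windows.h>

    #define N 1000000
    static int  data[N];
    static long sums[2];    /* one slot per thread, so no locking is needed */

    static DWORD WINAPI SumHalf(LPVOID p)
    {
        int  idx   = (int)(INT_PTR)p;  /* 0 = first half, 1 = second half */
        int  start = idx * (N / 2);
        int  i;
        long s     = 0;
        for (i = start; i < start + N / 2; i++)
            s += data[i];
        sums[idx] = s;                 /* same address space: just store it */
        return 0;
    }

    int main(void)
    {
        HANDLE h[2];
        int i;
        for (i = 0; i < N; i++)
            data[i] = 1;

        /* on a multi-core box these two can genuinely run in parallel */
        h[0] = CreateThread(NULL, 0, SumHalf, (LPVOID)(INT_PTR)0, 0, NULL);
        h[1] = CreateThread(NULL, 0, SumHalf, (LPVOID)(INT_PTR)1, 0, NULL);
        WaitForMultipleObjects(2, h, TRUE, INFINITE);

        /* sums[0] + sums[1] now holds the full total */
        return 0;
    }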


what changes in a thread switch is eip, regs and stack seg... no wonder it's faster (even if still yuck) than processes, where you've got to rape the data cache and all page tables... duh!


That assumes a perfect process/thread scheduling scenario. This can change when one process or even one thread blocks. The OS must continue to cycle through processes/threads without wasting *too much* time on excessive scheduling techniques. Unless two threads within the same process are scheduled back-to-back, you will most likely have to go through the fun of trashing the cache and reloading the PDBR.
Posted on 2007-09-25 12:38:50 by SpooK
tenkey: things have developed and progressed a bit since OS theory was invented back in the dinosaur age ;)

In windows terminology, a "process" is basically a container of resources: kernel objects (which can be files, sockets, pipes, mutexes, etc.), a memory space, and thread(s).

A thread has a thread context, which includes the CPU registers "and some other stuff", and each thread has its own stack.
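You can even peek at another thread's context from user mode; a rough x86 sketch (suspend the target first, or the snapshot is stale immediately):

    #include <windows.h>
    #include <stdio.h>

    /* dump a few registers from another thread's context (x86 CONTEXT) */
    void DumpThreadRegs(HANDLE hThread)
    {
        CONTEXT ctx;
        ctx.ContextFlags = CONTEXT_CONTROL | CONTEXT_INTEGER;
        SuspendThread(hThread);
        if (GetThreadContext(hThread, &ctx))
            printf("eip=%08lX esp=%08lX eax=%08lX\n",
                   ctx.Eip, ctx.Esp, ctx.Eax);
        ResumeThread(hThread);
    }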
Posted on 2007-09-25 16:03:30 by f0dder
For once, I can't complain about 'progress' - I think these are natural evolutions, and are robust and well-defined concepts, things I don't normally associate with Bill$oft
Posted on 2007-09-26 09:02:07 by Homer

Did unix originally even have threads? Took a while before it was implemented on linux afaik, because "our processes are so lightweight we don't need threads"...

FreeBSD had threads by 2003 so Unix surely had them before then.  Unix did not have threads before 1989.
Posted on 2007-09-27 21:02:21 by drhowarddrfine