I've thrown together a Server which uses IO Completion Ports.
It's strongly based on James Ladd's FASt Server project, so don't expect TOO much.
(By this I simply mean that it does what James' server does, no more, no less.)
It currently operates as an echo server on port 9080.
You can telnet to yourself and try it, but be quick, it shuts down in 30 seconds :)

http://homer.ultrano.com/Upload/Server.zip

And please guys, how about a little feedback?
Anyone interested in the source can perform a musical number right here, or alternatively, message me.

James, I liked your approach to Plugin code.. I took that one step further, and implemented the Server itself in a separate DLL.
The Server and Protocol plugins are both OOP classes encapsulated in DLL form.
The driving executable simply loads the Server DLL and calls its only exported function.
This creates an instance of Server object.
In turn, within the Server's constructor method, it will load the Protocol plugin from the given name, create an instance of the Plugin object, and store its pointer within the Server object.
Like you, I don't actually do anything with the Plugin yet, and I haven't implemented any kind of Collections (arrays, lists, whatever terminology you prefer); it's basically your published source transliterated to the ObjAsm32 oopasm model and shoved into two DLLs instead of one.
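
For the curious, here's roughly what the driving exe boils down to, sketched in C for brevity (the DLL file names and the export's exact signature are illustrative placeholders, not lifted from my source):

/* load the Server DLL, call its single export, and let the Server's
   constructor load the Protocol plugin by name */
#include <windows.h>

typedef void * (__stdcall *SERVER_CREATE)(const char *protocolDll);  /* assumed signature */

int main(void)
{
    HMODULE       hServer = LoadLibraryA("Server.dll");
    SERVER_CREATE pServerCreate;
    void         *pServerObject;

    if (!hServer) return 1;
    pServerCreate = (SERVER_CREATE)GetProcAddress(hServer, "Server_Create");
    if (!pServerCreate) return 1;

    /* the Server constructor loads the plugin DLL, creates the Plugin
       object, and stores its pointer inside the Server object */
    pServerObject = pServerCreate("EchoProtocol.dll");
    return pServerObject ? 0 : 1;
}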

ps : I took the liberty of making a whole bunch of optimisations, mostly at the register level.
I also replaced a bunch of two-line conditional jumps with a one-line macro:

JmpCase macro Condition, Place
.if Condition
jmp Place
.endif
endm

This generates je/jne opcodes, so there's no cost.. and it makes the source a lot smaller..

            mov edx, .XOVERLAPPED.operation
            JmpCase edx==OPERATION_ACCEPTED, main_io_worker_accepted
            JmpCase edx==OPERATION_READ,     main_io_worker_read
            JmpCase edx==OPERATION_WRITTEN,  main_io_worker_written

Heh.

Biterider, the driving executable checks for DebugCenter's regkey, warns the user if they haven't got it, and provides a URL to the file (on my dump), along with simple instructions ("put it somewhere safe, execute it once, then you're good to go").


Posted on 2005-06-11 10:55:01 by Homer
kewl.
Posted on 2005-06-12 01:48:31 by James_Ladd
I've made a milestone improvement and re-uploaded the Zip.
The server no longer hangs on Accept waiting for at least one byte of data to be sent by the Client.
It now Accepts immediately (the "null-bufferlen trick"), which means that I can implement some kind of timed culling of "zombie clients" (those who connect, then never send a single byte).
I'm yet to receive any legitimate feedback from you guys :|
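
For anyone curious what the "null-bufferlen trick" amounts to at the API level, here's a rough sketch in C (the context struct and helper names are placeholders I made up for illustration; the interesting part is the zero passed as AcceptEx's receive-data length):

#include <winsock2.h>
#include <mswsock.h>   /* AcceptEx - link with ws2_32.lib and mswsock.lib */

#define ADDR_LEN (sizeof(struct sockaddr_in) + 16)  /* required by AcceptEx */

typedef struct _ACCEPT_CTX {
    OVERLAPPED ovl;                    /* recovered by the worker from the  */
    SOCKET     hClient;                /* OVERLAPPED* returned by GQCS      */
    char       addrBuf[ADDR_LEN * 2];  /* still needed for the addresses    */
} ACCEPT_CTX;

BOOL QueueAccept(SOCKET hListen, ACCEPT_CTX *ctx)
{
    DWORD bytes = 0;

    ZeroMemory(&ctx->ovl, sizeof(ctx->ovl));
    ctx->hClient = WSASocket(AF_INET, SOCK_STREAM, IPPROTO_TCP,
                             NULL, 0, WSA_FLAG_OVERLAPPED);
    if (ctx->hClient == INVALID_SOCKET) return FALSE;

    /* receive-data length of 0: the accept completes as soon as the TCP
       handshake does, without waiting for the client to send anything   */
    if (!AcceptEx(hListen, ctx->hClient, ctx->addrBuf,
                  0,                     /* <-- the "null bufferlen" part  */
                  ADDR_LEN, ADDR_LEN, &bytes, &ctx->ovl))
        return (WSAGetLastError() == ERROR_IO_PENDING);
    return TRUE;
}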

I'm thinking about implementing the same zombie-cull mechanism as I did in the "Banked" socket model, which is a timed event per Client, such that the Client automatically (and asynchronously) "culls itself" from the Client pool without the need for any polling on my behalf (no "maintenance" cycle in Worker, and no special-purpose Thread).
Basically, when a Client is Accepted, we create a Timer object which will fire within say 10 seconds unless it is prevented from doing so. Code in the Recv handler resets the Timer to something more friendly like 2 minutes, so on reception of data, the "zombie cull" mechanism becomes an "idle cull" mechanism, reset each time data is received.
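
A rough sketch of that idea in C, using the Win32 timer-queue API as one possible way to get the per-Client timer (the struct, the timeout values and the callback are placeholders, and a real version would also have to synchronise the callback with the worker threads):

#include <winsock2.h>   /* closesocket; also pulls in the timer-queue API */

#define ZOMBIE_TIMEOUT_MS  (10 * 1000)      /* accepted, but no data yet   */
#define IDLE_TIMEOUT_MS    (2 * 60 * 1000)  /* has sent data at least once */

typedef struct _CULLED_CLIENT {
    SOCKET hSocket;
    HANDLE hCullTimer;
    /* ... buffers, overlapped, etc ... */
} CULLED_CLIENT;

static VOID CALLBACK CullClient(PVOID param, BOOLEAN fired)
{
    CULLED_CLIENT *c = (CULLED_CLIENT *)param;
    (void)fired;
    closesocket(c->hSocket);   /* outstanding IO then completes with an error
                                  and the worker recycles the Client          */
}

/* on Accept completion: arm the short "zombie" deadline */
void ArmZombieTimer(CULLED_CLIENT *c)
{
    CreateTimerQueueTimer(&c->hCullTimer, NULL, CullClient, c,
                          ZOMBIE_TIMEOUT_MS, 0, WT_EXECUTEDEFAULT);
}

/* in the Recv handler: push the deadline out to the friendlier idle value */
void ResetIdleTimer(CULLED_CLIENT *c)
{
    ChangeTimerQueueTimer(NULL, c->hCullTimer, IDLE_TIMEOUT_MS, 0);
}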

OnConnect zombies have yay many seconds to speak up, or bugger off.
Connected clients have yay many minutes to speak up, or bugger off.

Anyone have a better idea?
Posted on 2005-06-12 02:50:58 by Homer
Why give 2 minutes to connected clients? I think 60 seconds is far more than necessary :|  Also, if the client connects, he may try to send 1 byte of data per 30 seconds or so; there should be a minimum to send (for example, 1 full line of a command in HTTP). Also, if the client sends junk, it should be disconnected, and if this is repeated n times, banned.
Posted on 2005-06-12 12:11:38 by ti_mo_n
The values I gave are subjective examples, and could be altered in realtime.
The actual values would depend highly on the protocol being implemented by the server.
For example, one minute may be unreasonably low for a chat server.
In my particular case, I am implementing a p2p protocol, and as such, two minutes sounds reasonable in theory.. practical values can only be determined by trial and error.
Don't focus so much on the numbers, instead focus on the theory behind them.

Has anyone actually bothered to TRY the demo?
I do log all hits to my dumpsite, and I see downloads, but no feedback regarding the demo :|
Posted on 2005-06-13 10:06:17 by Homer
I could connect, and everything I sent was displayed in the "received" window 1 char per line. So I think it works..? :)

About those times -- I thought it was going to stay like that :P

Make a configurable .cfg file, like:
Port = 80
ZombieTimeout = 30
ClientTimeout = 120

etc.

And why does the server quit if there is no input for some time (30 seconds?)? (I assume it is NOT going to stay like that?)
Posted on 2005-06-13 20:44:22 by ti_mo_n
The demo kills itself after 30 seconds, no matter what happens.
The reason is simply that I am lazy, and while debugging and testing, I don't want to have to kill the server process manually (there's no window, so there's no close button to press.. I'd have to terminate the process manually, and that requires effort !!)
I can remove the self-destruct very easily once I'm happy that everything is behaving as intended.

Thanks for the feedback, that's exactly what it's meant to be doing right now.
The debug code which prints received data automatically adds linefeeds, so we see each keystroke on a separate line.
Just one question - does the server Accept immediately on Connect, or only after the first keystroke (first receive)? The purpose of the most recent modification was to Accept immediately, and not require ANY data be sent..
The differences between this code and James' version currently boil down to:
A) - The buffer length is set to NULL for calls to AcceptEx (that's the "null bufferlen trick").
B) - On completion of an Accept, we queue a Recv instead of a Send, since we are yet to receive any data.
C) - I take a little more notice of the "bytes" parameter and don't assume everything is single-byte IO (B and C are sketched below).
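
A rough C sketch of points B and C (the CLIENT struct, its fields and the buffer size are placeholders standing in for the real XOVERLAPPED wrapper; only the shape of the calls matters):

#include <winsock2.h>

enum { OPERATION_ACCEPTED, OPERATION_READ, OPERATION_WRITTEN };

typedef struct _CLIENT {           /* stand-in for the XOVERLAPPED wrapper */
    OVERLAPPED ovl;
    SOCKET     hSocket;
    int        operation;
    char       recvBuf[4096];
} CLIENT;

/* B) once an Accept completes, the first IO posted is a read, because with
   a zero-length AcceptEx buffer no data has arrived yet                    */
BOOL QueueRecv(CLIENT *c)
{
    WSABUF buf = { sizeof(c->recvBuf), c->recvBuf };
    DWORD  flags = 0, recvd = 0;

    ZeroMemory(&c->ovl, sizeof(c->ovl));
    c->operation = OPERATION_READ;
    if (WSARecv(c->hSocket, &buf, 1, &recvd, &flags, &c->ovl, NULL) == SOCKET_ERROR)
        return WSAGetLastError() == WSA_IO_PENDING;
    return TRUE;
}

/* C) the worker passes along the byte count it got back from
   GetQueuedCompletionStatus instead of assuming single-byte IO */
void OnReadComplete(CLIENT *c, DWORD bytes)
{
    if (bytes == 0) {
        /* graceful disconnect - hand the Client back to the pool */
    } else {
        /* echo exactly 'bytes' bytes, then queue the next read   */
    }
}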

You might notice that you cannot currently make more than one client session simultaneously - this is because there is currently no "client pooling" implemented. I'll be addressing that next.
Also, the "plugin protocol DLL" is not currently being employed... that'll change too :)
I've written (and posted) a basic Client class as the first step towards a client pooling system.
I intend to use the OA32 class called "Collection" to implement pooling.
It's simply a manager class for storing a bunch of arbitrary objects.. it manages an array of pointers, and uses the "pull-down" method to "close holes in the array" created by arbitrary deletions.
It's perfectly suitable for this kind of thing, and already contains code to automatically "sweep" the array and destroy all contained objects when it is itself destroyed..
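
To picture the "pull-down" idea (this is NOT the OA32 Collection API, just a little C illustration of the technique): deleting slot i shifts every pointer above it down one place, so the array never has holes and the count always equals the number of live objects.

#include <stddef.h>

typedef struct {
    void   **items;     /* array of object pointers */
    size_t   count;     /* number of live entries   */
    size_t   capacity;
} PtrCollection;

void PtrCollection_DeleteAt(PtrCollection *c, size_t i)
{
    size_t j;
    if (i >= c->count) return;
    for (j = i; j + 1 < c->count; j++)   /* pull the tail down over slot i */
        c->items[j] = c->items[j + 1];
    c->count--;
}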

I'm still hoping to hear some more opinions on the proposed zombie culling mechanism ... also, what are your thoughts regarding the Debug support? Personally I was pretty impressed.. this stuff is all standard within the OA32 model :)

Posted on 2005-06-14 00:28:52 by Homer
Another milestone was achieved today (zip updated, same old url as previously).
The proposed Client class was implemented : and Client Pooling was implemented via an instance of ObjAsm32's Collection class :)
All the "acceptor" code was moved to Client class.. the Client class constructor method Client.Init contains the code from "InitAcceptors", and under a new Method called Client.Accept we find the old code from "AcceptAcceptors".
The Client class is essentially a wrapper for XOVERLAPPED, the "Extended Overlapped" structure.
Server.acceptors dword data member is now a Pointer to the Client Collection object.
The Collection is initialized to hold up to 10 thousand Clients :)
I create 500 Clients in the latest beta demo, and have tested that I can create multiple sessions successfully :D Realistic limit for a single-cpu machine is supposed to be 5000 according to the wisdom of C coders, but who's to say? We'll find out soon ...
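
The nice property of wrapping XOVERLAPPED is that the worker can get from the OVERLAPPED pointer returned by GetQueuedCompletionStatus straight back to the owning Client, with no lookup into the Collection at all. In C terms (CLIENT being the same placeholder struct as in the earlier sketch):

#include <winsock2.h>

void WorkerIteration(HANDLE hIocp)
{
    DWORD       bytes = 0;
    ULONG_PTR   key   = 0;
    OVERLAPPED *povl  = NULL;

    if (GetQueuedCompletionStatus(hIocp, &bytes, &key, &povl, INFINITE) && povl) {
        /* the Client embeds the overlapped struct, so the pointer leads
           straight to the Client object                                  */
        CLIENT *client = CONTAINING_RECORD(povl, CLIENT, ovl);
        /* dispatch on client->operation here, as in the JmpCase snippet  */
        (void)client; (void)bytes; (void)key;
    }
}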

Have a nice day :)
Posted on 2005-06-14 05:42:20 by Homer
I'd just like to add that I have since tested with 5,000 and 10,000 clients.
The 10,000 limit of Collection was an artificial limit imposed by myself.. I'm not sure what XP's limit for open socket handles really is, but I do have a bunch of error checking and I'm finding 10,000 succeeds on my (wait for it) 333mhz machine with 48 meg of ram :)
Posted on 2005-06-14 06:55:46 by Homer
Progress Report :
Last minute addition of code to recycle Client objects without scavenging them.
At first I was trying to recycle the socket handle, until I read on the net that on a recv of 0 bytes (client disconnected) the socket is literally removed from the iocp...
I considered destroying the defunct Client object (remember, a Client is a container for the XOverlapped containing the SocketHandle, Buffer, etc). But this would mean using existing code in the Collection class to exhaustively search the Collection for the Client, destroy it, deallocate its buffer, remove it from the collection, then recreate the client, reallocate the buffer, and put it BACK in the collection.. really, REALLY inefficient.
Then I realized I don't have to.
I have created a new Client.CreateSocket method from part of the Client.Init code.
Now I can call that to recreate the Client.hSocket and then call Client.Accept again to requeue an Accept job, thus avoiding the scenario mentioned previously.
It's working :D
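
In C terms the recycle path amounts to something like this (names are placeholders; CLIENT and OPERATION_ACCEPTED are the stand-ins from the earlier sketch, QueueAcceptOn stands in for Client.Accept, and the WSASocket call is what Client.CreateSocket does):

#include <winsock2.h>

BOOL QueueAcceptOn(CLIENT *c);   /* assumed helper: posts AcceptEx on c->hSocket */

BOOL RecycleClient(CLIENT *c)
{
    if (c->hSocket != INVALID_SOCKET)
        closesocket(c->hSocket);             /* the old, disconnected handle */

    /* Client.CreateSocket: give the same Client object a fresh socket */
    c->hSocket = WSASocket(AF_INET, SOCK_STREAM, IPPROTO_TCP,
                           NULL, 0, WSA_FLAG_OVERLAPPED);
    if (c->hSocket == INVALID_SOCKET)
        return FALSE;

    /* Client.Accept: re-queue an Accept, so no destroy/recreate is needed */
    c->operation = OPERATION_ACCEPTED;
    return QueueAcceptOn(c);
}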

However, that's not under load.. if there's more than one io job posted for a given Client at once, then we'll get more than one null receive, and we'll try to destroy the client more than once as well.
At the moment, the client uses a single overlapped structure.
This will have to be replaced with a collection of outstanding io structs, and a counter for them, if I want this thing to not fall over under load, and to be truly asynch..
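
One common way to keep that from falling over, sketched in C (the layout and names are mine, not the eventual implementation): give every outstanding operation its own overlapped wrapper, and keep an interlocked count of jobs in flight, so only the completion that drops the count to zero gets to recycle the Client.

#include <winsock2.h>

typedef struct _IOJOB {
    OVERLAPPED     ovl;         /* one overlapped per outstanding operation */
    int            operation;   /* OPERATION_READ / OPERATION_WRITTEN ...   */
    struct _OWNER *owner;
    WSABUF         buf;
} IOJOB;

typedef struct _OWNER {         /* the Client, which no longer owns the ovl */
    SOCKET        hSocket;
    volatile LONG pendingJobs;  /* number of IO jobs still in flight        */
} OWNER;

void OnJobPosted(OWNER *c)
{
    InterlockedIncrement(&c->pendingJobs);
}

void OnJobCompleted(OWNER *c, IOJOB *job, BOOL clientIsDead)
{
    /* return 'job' to its pool here, then ...                              */
    if (InterlockedDecrement(&c->pendingJobs) == 0 && clientIsDead) {
        /* the last outstanding job has drained: recycle the Client exactly
           once, instead of once per null receive                           */
    }
    (void)job;
}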

James, I need to pick your brain - I still get the feeling you've done this before :)

Posted on 2005-06-14 11:17:06 by Homer
The silence was deafening..
Posted on 2005-06-14 19:42:17 by Homer
EvilHomer,
If your example is still based on my early code then you won't have the Send modification that ensures sending doesn't create an overlapped IO.
This removed the need for these send completions to be queued. I'll send you some example code soon.
Also, if you haven't already done so, use socket IO control to turn off the send and receive buffering. IO_SND_BUF or something similar.
Again, I'll look this up and let you know.
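
From memory it's setsockopt with the send/receive buffer sizes set to zero - roughly this in C (treat the exact option names as the thing I still need to confirm):

#include <winsock2.h>

int DisableSocketBuffering(SOCKET s)
{
    int zero = 0;

    /* zero send buffer: overlapped sends go straight from your own buffers */
    if (setsockopt(s, SOL_SOCKET, SO_SNDBUF,
                   (const char *)&zero, sizeof(zero)) == SOCKET_ERROR)
        return SOCKET_ERROR;

    /* zeroing the receive buffer is more debatable - measure before keeping it */
    return setsockopt(s, SOL_SOCKET, SO_RCVBUF,
                      (const char *)&zero, sizeof(zero));
}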

If you want to ask more detailed questions, then go ahead. I'll try to get back to this page twice daily to reply.

Rgs, James.

Posted on 2005-06-14 21:59:39 by James_Ladd
My experience with asynchronous stuff is quite limited.
I'm not sure what to expect !!
Basically I can't envisage a situation where eg multiple Recvs for a given Client would be a necessity.. however I can appreciate the potential collision of concurrent read/write operations..

As for network protocols, I am most comfortable with binary protocols, but in my mind I use the example of a webserver, where eg a client requests a file, the server begins transmitting the file, then the Client unexpectedly requests Some Other File .. probably most asynch webservers would cancel all outstanding IO and then queue the new job.. but I am really guessing at this.

Now that I have implemented client pooling (which was a whole 5 minutes work thanks to Biterider's Collection class) I suppose I should continue and implement IO pooling per Client, ie, a Client object should own a list of outstanding IO jobs in the form of a list of XOVL.. am I on the right track?

Regardless, my next move will be to shift the IO event handlers into the Protocol plugin where they belong.. James, I've probably learned as much as I could hope to learn from your public code, I admit to not studying your private stock carefully yet..
I will repost the Client object class right now, so you can see what was implemented there.
The most important divergences from your project code have been duly noted in previous postings.
Posted on 2005-06-14 23:42:45 by Homer
EvilHomer,

The queuing I think you're referring to is not necessary for you to implement; this is what the completion ports are for.
If your HTTP server is sending pages back to the client, and the client requests another page, then you want to act on that request. Which you would, because it was queued: you got the notification on the completion port and acted upon it.

If during that processing several more client requests queue up, you can either peek the completion port queue (I forget the API off hand) or just get around to each request as you dequeue it with GetQueuedCompletionStatus.

You will need a buffer of Overlapped structures, as you will want to issue a read for each read that completes if you have more than one read outstanding. You don't have to issue another read until one completes, but there could be an advantage in having at least one outstanding read.

Depending on how you handle sends, you may also want a buffer of overlapped structures, ie: as you fill a buffer with read data you send it, which queues the send, then read another buffer and queue that send. You can wait for send completion notification but I would not, as it's faster for you to just do the sends and not wait to be told the send happened. All this within reason, as there is no point in flooding the client.
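
In rough C terms, the send side I'm describing is something like this (just a sketch of the idea, not FAStServer code): each queued send carries its own overlapped structure, and nothing blocks waiting for the send to complete - its completion simply arrives on the port later like any other IO.

#include <winsock2.h>
#include <string.h>

typedef struct _SENDJOB {
    OVERLAPPED ovl;            /* one overlapped per queued send */
    WSABUF     buf;
    char       data[4096];
} SENDJOB;

BOOL QueueSend(SOCKET s, SENDJOB *job, const char *data, DWORD len)
{
    DWORD sent = 0;

    if (len > sizeof(job->data)) return FALSE;
    memcpy(job->data, data, len);
    job->buf.buf = job->data;
    job->buf.len = len;
    ZeroMemory(&job->ovl, sizeof(job->ovl));

    if (WSASend(s, &job->buf, 1, &sent, 0, &job->ovl, NULL) == SOCKET_ERROR)
        return WSAGetLastError() == WSA_IO_PENDING;
    return TRUE;
}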

I sure hope that I am helping with this response.

Rgs, James.
Posted on 2005-06-15 00:23:02 by James_Ladd
James - yes, that helped in that it adds weight to my philosophical leaning (not a typo).
I appreciate any and all feedback, and especially yours, since I suspect you have more experience in this arena than I do.
I am seriously considering implementing a http plugin, not because I have any real interest in being the guy who kicked the crap out of apache, nor because of commercial viability, but simply because web users are unpredictable, and if I can write my server to expect the unexpected, I see that as a positive :)
Posted on 2005-06-16 02:25:03 by Homer
EvilHomer,

If you are in the middle of a request and you receive another, then you can't stop any queued sends, but you could stop further sends if you knew the new requests are for a new page and not pieces of the same page, ie: you sent the html page, but the new requests are for gif images etc of the same page.

I'm thinking for http you probably never want to terminate a request.

Anyways, please consider writing a HTTP plugin for FAStServer so we both can use it.
I do plan on being faster than apache and being more widely used. Maybe I'm dreaming.

Rgs James.
Posted on 2005-06-16 02:49:12 by James_Ladd
Hello James,

How's things going? Need any muscle in any coding?  ;) Maybe I can help, you just need to fill me with some details.
Posted on 2005-06-16 06:56:54 by roticv
Wow, thanks for the offer Victor.
A new complete version of FAStServer will be published this weekend (it's Friday now).
When it is, I'd like it if you could write a plugin that does more than an Echo.
I don't care what, but I suggest HTTP with or without ISAPI support would be popular.
I'll contact you directly when the latest version is posted.
Rgs, James.
Posted on 2005-06-16 19:57:59 by James_Ladd
James - thanks for your support.
I've had a couple of quiet days because a friend of mine is having some dramas and I've been cheering him up.. I prefer to count my friends on one hand, and when my mates need me, I prioritize.
Since I got home five minutes ago, I manipulated my source such that the parameters for the Client collection are given as parameters of the Server.DLL's exported Server_Create proc.
This means that the end-user's exe has control of the ambient and maximum Client capacity of the Server component / iocp.
By ambient I refer to the number of Accepts pending at all times, and by maximum I refer to the capacity of the Client Collection.
The only other thing I did was to split the includes for the Server and Client classes into two files per class - one containing the "classdef", the other the "codebase" for each. The reason is that if we include both classdefs before we include the codebases, we can refer to each class from within the other, making "crosscalling" within class methods a straightforward thing.
When my beer goggles are on, I'll see what I can do about adding a per-Client XOVL DataCollection ... but my question for today, James: can you see any good reason to support arrays of WSABUF in single IO operations? If you can, I'll write a special class for arbitrary struct collections, since DataCollection stores pointers to structs rather than being a linear array of structs..
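
For reference (so we're talking about the same thing), passing more than one WSABUF to a single call is just scatter/gather IO - e.g. a header and a body sent from two separate buffers in one WSASend. A rough C sketch:

#include <winsock2.h>

int SendHeaderAndBody(SOCKET s, char *hdr, u_long hdrLen,
                      char *body, u_long bodyLen, OVERLAPPED *ovl)
{
    WSABUF parts[2];
    DWORD  sent = 0;

    parts[0].len = hdrLen;  parts[0].buf = hdr;
    parts[1].len = bodyLen; parts[1].buf = body;

    /* one overlapped send, two source buffers */
    if (WSASend(s, parts, 2, &sent, 0, ovl, NULL) == SOCKET_ERROR)
        return (WSAGetLastError() == WSA_IO_PENDING) ? 0 : SOCKET_ERROR;
    return 0;
}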

I'm going to have a few beers and cheer myself up for a while.
When my beer goggles are on, I'll come back and look for a reply :)

Vic, I'd like you to see the oop source as well.
I have not spent much time on it to be perfectly honest, everything's  crazy at the moment, and I'm sure you would get a kick out of it..

Ultrano, GameServer classes soon :)
Posted on 2005-06-17 02:33:18 by Homer
I ended up creating an IOJob class, and an Allocator class which manages Used and Free collections of IOJob objects. Parts of the Client class were moved into IOJob. The Server object creates an instance of Allocator which is accessible to itself, all Client objects, Plugin module, etc.
Todo: add a Maintenance function to periodically Shrink the collections, so the Server can scale itself Down when it becomes "less busy than it has been".
Since I'm not done implementing the latest changes yet, I'm not posting a new binary yet.
Perhaps later today...
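
In the meantime, for anyone trying to picture the Used/Free arrangement, here's a rough C sketch of the idea (not the actual Allocator class): jobs come off a Free list, sit on a Used list while in flight, and go back to Free on completion; the Shrink pass would then just release surplus Free entries.

#include <stddef.h>
#include <string.h>

typedef struct _IOJOB IOJOB;    /* the per-operation wrapper described above */

typedef struct {
    IOJOB **freeItems;  size_t freeCount;
    IOJOB **usedItems;  size_t usedCount;
} Allocator;

IOJOB *Allocator_Get(Allocator *a)
{
    IOJOB *job;
    if (a->freeCount == 0) return NULL;      /* or grow the pool here */
    job = a->freeItems[--a->freeCount];
    a->usedItems[a->usedCount++] = job;
    return job;
}

void Allocator_Release(Allocator *a, IOJOB *job)
{
    size_t i;
    for (i = 0; i < a->usedCount; i++) {
        if (a->usedItems[i] == job) {        /* pull-down removal, as before */
            memmove(&a->usedItems[i], &a->usedItems[i + 1],
                    (a->usedCount - i - 1) * sizeof(IOJOB *));
            a->usedCount--;
            break;
        }
    }
    a->freeItems[a->freeCount++] = job;
}
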
Posted on 2005-06-17 16:14:10 by Homer