A small bug was detected in the Client object today: it results in the 'Pending Reads' counter being incremented more than once, leading to a situation where the only pending read is discarded, so we can't receive anything!
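Just to make clear what that counter is supposed to guard, here's a rough C sketch of the intended accounting - the names (pendingReads, PostRead, OnReadComplete) are invented for illustration and are not NetEngine's actual fields. The counter should go up exactly once per posted overlapped read and come down exactly once per completion:

    /* Hypothetical illustration of pending-read accounting - not NetEngine's actual code. */
    #include <winsock2.h>

    typedef struct Client {
        SOCKET        sock;
        volatile LONG pendingReads;      /* overlapped reads still outstanding */
        WSAOVERLAPPED readOv;
        WSABUF        readBuf;
        char          buffer[4096];
    } Client;

    /* Post one overlapped read and count it exactly once. */
    BOOL PostRead(Client *c)
    {
        DWORD flags = 0;
        c->readBuf.buf = c->buffer;
        c->readBuf.len = sizeof(c->buffer);
        ZeroMemory(&c->readOv, sizeof(c->readOv));

        InterlockedIncrement(&c->pendingReads);       /* one increment per WSARecv */
        if (WSARecv(c->sock, &c->readBuf, 1, NULL, &flags, &c->readOv, NULL) == SOCKET_ERROR
            && WSAGetLastError() != WSA_IO_PENDING) {
            InterlockedDecrement(&c->pendingReads);   /* the read was never queued */
            return FALSE;
        }
        return TRUE;
    }

    /* Completion side: exactly one decrement per completed read. */
    void OnReadComplete(Client *c)
    {
        InterlockedDecrement(&c->pendingReads);
    }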
I guess it's about time I posted more updates.
Ouch, hit the attachments limit!
The LobbyServer and LobbyClient protocol handlers I've included are very recent work, and are not really part of NetEngine as such; however, I thought some of the people who've been following this thread may be interested in taking a look at what a full-blown protocol implementation looks like.
It's possible to write a protocol handler which deals with both the client and server sides; however, this would require that I clearly label the Client object as being inbound, outbound or listening. I may do that in the near future, as it would allow the code in the NetEngine.GoodbyeClient method to be more intelligent, eliminating the requirement to search the three client pools for the redundant client.
If I do this, I could in theory use a single client pool, but the catch is that searching for the redundant client could take up to three times longer, which might not be a good thing for a server application under attack.
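Purely to illustrate the labelling idea (the enum, field and pool names below are my own invention, not NetEngine's actual layout), a role tag would let GoodbyeClient go straight to the correct pool instead of searching all three:

    /* Hypothetical sketch of role-tagged clients - names invented for illustration. */
    typedef enum ClientRole { ROLE_INBOUND, ROLE_OUTBOUND, ROLE_LISTENING } ClientRole;

    typedef struct Client {
        ClientRole role;                 /* set once, when the client is created or accepted */
        /* ... socket, buffers, etc. ... */
    } Client;

    typedef struct Pool Pool;            /* opaque here */
    extern Pool *InboundPool, *OutboundPool, *ListeningPool;
    extern void  PoolRemove(Pool *pool, Client *client);

    /* With the role tag, the redundant client can be removed from the right
       pool directly - no searching required. */
    void GoodbyeClient(Client *client)
    {
        switch (client->role) {
        case ROLE_INBOUND:   PoolRemove(InboundPool,   client); break;
        case ROLE_OUTBOUND:  PoolRemove(OutboundPool,  client); break;
        case ROLE_LISTENING: PoolRemove(ListeningPool, client); break;
        }
    }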
Two good reasons for using separate client and server protocol handlers are that it eliminates all the switch logic that would be mandatory in a two-sided protocol handler, and it makes implementing and debugging the protocol simpler... not to mention making it clearer to read.
I really don't see a lot of advantages in trying to blend them into one object / one file, or situations where this might be desirable; however, it would be a trivial thing to take a pair of working / debugged protocol handlers and splice them together.
I've made another couple of small improvements to NetEngine... only a couple of minutes out of my life, but I suspect they're worth a lot more than the two minutes they took to implement.
#1 - The 'anti-zombie' thread is not started until the first call to Listen is made... therefore, we don't have a useless thread executing in the background of non-server applications.
#2 - There are now two layers of protection against zombie attacks:
A: Having been Accepted, the inbound client must send at least one complete and valid packet within 5 seconds or be booted (this test already existed).
B: Having passed test A, the inbound client must send at least one valid packet every 10 minutes to remain connected (this test is new).
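For anyone curious, here is a minimal C sketch of how such a two-stage check could look - the field names are assumptions on my part, not the actual NetEngine code:

    /* Hypothetical anti-zombie check - two-stage timeout, names invented for illustration. */
    #include <windows.h>

    #define FIRST_PACKET_TIMEOUT_MS  (5 * 1000)        /* test A: 5 seconds  */
    #define IDLE_TIMEOUT_MS          (10 * 60 * 1000)  /* test B: 10 minutes */

    typedef struct InboundClient {
        DWORD acceptedAt;         /* GetTickCount() when the connection was accepted              */
        DWORD lastValidPacketAt;  /* GetTickCount() when the last complete, valid packet arrived  */
        BOOL  gotFirstPacket;     /* has test A been passed yet?                                  */
    } InboundClient;

    /* Returns TRUE if the anti-zombie thread should boot this client. */
    BOOL IsZombie(const InboundClient *c, DWORD now)
    {
        if (!c->gotFirstPacket)                                        /* test A */
            return (now - c->acceptedAt) > FIRST_PACKET_TIMEOUT_MS;
        return (now - c->lastValidPacketAt) > IDLE_TIMEOUT_MS;         /* test B */
    }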
Here are the updated files.
The more complex a project becomes, the more prone it is to subtle bugs, even when you've written comments warning about them.
I just found another minor bug in the Client class; it's related to the recently-implemented code for attempting to reconnect a connection which was unexpectedly terminated.
I won't post it now as it's a minor fix and is highly dependent on the protocol handler (in this case, LobbyClient).
Basically, if the LobbyServer detects a connection that has been idle too long, it will terminate it - the LobbyClient will attempt to reconnect, which is really not necessary in this case, and then we hit an int3 I planted earlier in development.
Actually, the 'reconnect' bug is a little more complex...
When the Server 'unexpectedly' disconnects a Client, we end up with a 'half open' connection... the Client will receive an 'ECONNRESET' error, which means 'the other party has terminated the session'... however, if we immediately try to reconnect, we'll get a 'WSAEISCONN' (10056) error, which means 'the socket is already connected'!!!
Even though the session has terminated, and the Client knows about it, the Client's socket remains in the Connected state - a 'half open' connection!!!
In response to a Connection Reset, we should immediately close our socket... I'll add this later, but I'm thinking about whether it's viable to write a general-purpose 'OnIOError' event handler in NetEngine to handle all the various errors we might encounter, rather than dealing with them piecemeal throughout the engine.
In theory, the call to this method would belong somewhere near the end of the Worker thread.
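To make the half-open problem concrete, here's a rough C sketch of the kind of centralized handler I have in mind - 'OnIOError' is just a name for the idea, only the Connection Reset case is shown, and none of this is the actual NetEngine implementation:

    /* Hypothetical 'OnIOError' dispatcher, called near the end of the worker thread
       when an I/O operation completes with an error. Illustration only. */
    #include <winsock2.h>

    typedef struct Client { SOCKET sock; BOOL wantReconnect; } Client;

    void OnIOError(Client *c, int wsaError)
    {
        switch (wsaError) {
        case WSAECONNRESET:
            /* The peer has killed the session, but our socket still thinks it is
               connected. Close it immediately; any reconnect must use a brand-new
               socket, otherwise connect() fails with WSAEISCONN (10056). */
            closesocket(c->sock);
            c->sock = INVALID_SOCKET;
            if (c->wantReconnect) {
                c->sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
                /* ...then re-issue the connect on the fresh socket... */
            }
            break;

        default:
            /* other WSA errors would be handled here */
            break;
        }
    }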
There is another thing I am thinking about implementing too - it may be possible to eliminate the IOJobs pooling scheme in favor of a couple of embedded IOJobs within each Client object.
The benefits include more speed, less complexity, and guaranteed order of operations across N threads with no mutex needed.
The main drawback is that we can't just pump out Jobs to perform a large transfer, and we can't collect partial IOJobs for large transfers... we need a new way of 'accumulating' or 'aggregating' data in both directions, which introduces more 'memory fragmentation', the costs of reallocating and moving data, etc.
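To show what the layout difference would look like (a hypothetical sketch only - the IOJob fields are guesses on my part), compare the pooled arrangement with the embedded one:

    /* Hypothetical layout comparison - names and fields invented for illustration. */
    #include <winsock2.h>

    typedef struct IOJob {
        WSAOVERLAPPED ov;                /* per-operation overlapped structure */
        WSABUF        wsaBuf;
        char          data[4096];
    } IOJob;

    /* Current scheme (roughly): jobs are drawn from a shared pool, so a large
       transfer can simply be split across as many jobs as it needs. */
    typedef struct ClientPooled {
        SOCKET sock;
        /* IOJobs are acquired from, and returned to, a global pool */
    } ClientPooled;

    /* Proposed scheme: exactly one read job and one write job live inside the
       Client - no pool, no mutex, and a fixed order of operations, but large
       transfers must be accumulated/aggregated into these two buffers. */
    typedef struct ClientEmbedded {
        SOCKET sock;
        IOJob  readJob;
        IOJob  writeJob;
    } ClientEmbedded;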
This project is available in the ObjAsm32 public library under the name "NetCom" :)
OA32 users probably already have it.
Don't forget to check for updates with the Updater tool!