From Thomas' tutorial I found this piece of text, which talks about using select() on non-blocking sockets:


You might have noticed that the select call looks suspiciously similar to the blocking socket timeline. This is because the select function does block. The first call tries to perform the winsock operation. In this case, the operation would block but the function can't so it returns. Then at one point, select is called. Select will wait until the best time to retry the winsock operation. So instead of blocking winsock functions, we now have only one function that blocks, select.


So this means if a server waits for a client request/response for 3 minutes, then select will block for 3 minutes? That doesn't sound like the select I've been using on blocking sockets. My experience says that select can return and report that the socket is neither readable nor writable, with no error. In such cases I would just go to the next socket in the queue, from a server point of view.

Also, on non-blocking sockets I'd like to know if it does partial recv/sends. On blocking sockets my experience is that if you sent 10 KB, it could split it up into smaller chunks and return how much it actually sent. Would that still happen on non-blocking sockets, or when I send the whole thing would it keep saying WOULDBLOCK until everything is sent?

The last question I have is a general threading question which probably belongs in another section.
Let's say you have an array with 20 sockets and maybe 3 worker threads on a server. You want each socket handled by one and only one thread at a time, which requires a synchronization mechanism. One problem with this is that if each thread keeps an index into the array from 0 to 19 and a client disconnects in the middle, the count changes and eventually you would hit a dead socket, depending on your array implementation. Have any of you had this problem before, and what did you do about it?
I'm coding a system in C++ with the STL, where I have a list container that lets me remove clients from the middle of the list when they disconnect. If I used iterators, each thread's iterator could be invalidated as soon as the list is modified. To solve this I made the list into a circular list where I do the following:

1. Lock the list mutex.
2. Pop the first client from the start of the list.
3. Unlock the list mutex.
4. Process this client's request.
5. Lock the list mutex.
6. Push the client at the end of the list.
7. Unlock the list mutex.

If the client disconnects, steps 5 to 7 are replaced with a "free client resources" operation instead.
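
In code the loop looks roughly like this (just a sketch: Client, processRequest() and freeClient() are made-up names, and std::mutex stands in for the lock I actually use):

#include <list>
#include <mutex>

struct Client { int sock; };      // hypothetical per-connection state

bool processRequest(Client* c);   // hypothetical: returns false on disconnect
void freeClient(Client* c);       // hypothetical cleanup

std::list<Client*> g_clients;     // the shared circular client list
std::mutex g_listMutex;

void workerLoop()
{
    for (;;) {
        g_listMutex.lock();                    // 1. lock list mutex
        if (g_clients.empty()) {
            g_listMutex.unlock();
            continue;                          // nothing to do right now
        }
        Client* c = g_clients.front();         // 2. pop first client
        g_clients.pop_front();
        g_listMutex.unlock();                  // 3. unlock list mutex

        if (processRequest(c)) {               // 4. process the request
            g_listMutex.lock();                // 5. lock list mutex
            g_clients.push_back(c);            // 6. push client at the end
            g_listMutex.unlock();              // 7. unlock list mutex
        } else {
            freeClient(c);                     // disconnected: free resources
        }
    }
}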

Thanks in advance.

// CyberHeg
Posted on 2003-07-20 11:58:53 by CyberHeg
Originally posted by CyberHeg
From Thomas' tutorial I found this piece of text, which talks about using select() on non-blocking sockets:
[...]
So this means if a server waits for a client request/response for 3 minutes, then select will block for 3 minutes? That doesn't sound like the select I've been using on blocking sockets. My experience says that select can return and report that the socket is neither readable nor writable, with no error. In such cases I would just go to the next socket in the queue, from a server point of view.

That shouldn't be possible according to the documentation:

The parameter time-out controls how long the select can take to complete. If time-out is a NULL pointer, select will block indefinitely until at least one descriptor meets the specified criteria.
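
The case you saw (select returning with nothing readable or writable and no error) can only be the time-out case: with a non-NULL time-out, select returns 0 when the time expires before any descriptor is ready. A rough sketch of both modes (the 5-second value is just an example):

#include <winsock2.h>

int waitReadable(SOCKET s, bool forever)
{
    fd_set readSet;
    FD_ZERO(&readSet);
    FD_SET(s, &readSet);

    timeval tv;
    tv.tv_sec = 5;                 // give up after 5 seconds
    tv.tv_usec = 0;

    // With a NULL time-out select blocks indefinitely; with &tv it
    // returns 0 when nothing became ready in time. The first
    // parameter is ignored by winsock.
    return select(0, &readSet, NULL, NULL, forever ? NULL : &tv);
    // >0: readable, 0: timed out, SOCKET_ERROR: check WSAGetLastError()
}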

Also, on non-blocking sockets I'd like to know if it does partial recv/sends. On blocking sockets my experience is that if you sent 10 KB, it could split it up into smaller chunks and return how much it actually sent. Would that still happen on non-blocking sockets, or when I send the whole thing would it keep saying WOULDBLOCK until everything is sent?

In blocking mode, a send call should either send every byte you requested or fail. Only on non-blocking sockets might fewer bytes have been sent. Keep sending until you have sent everything; on WOULDBLOCK, wait for the next FD_READ/FD_WRITE and continue. Receiving is the same for blocking and non-blocking, except that the blocking version of course blocks. There's no guarantee about the amount of bytes received.
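
Something like this, roughly (a sketch for a non-blocking socket; sendAll is a made-up name, and it waits for writability with select instead of FD_WRITE messages):

#include <winsock2.h>

bool sendAll(SOCKET s, const char* buf, int len)
{
    int sent = 0;
    while (sent < len) {
        int n = send(s, buf + sent, len - sent, 0);
        if (n != SOCKET_ERROR) {
            sent += n;                        // partial send: keep going
            continue;
        }
        if (WSAGetLastError() != WSAEWOULDBLOCK)
            return false;                     // a real error

        fd_set writeSet;                      // buffer full: wait until
        FD_ZERO(&writeSet);                   // the socket is writable
        FD_SET(s, &writeSet);
        if (select(0, NULL, &writeSet, NULL, NULL) == SOCKET_ERROR)
            return false;
    }
    return true;
}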

The last question I have is a general threading question which probably belongs in another section.
Let's say you have an array with 20 sockets and maybe 3 worker threads on a server. You want each socket handled by one and only one thread at a time, which requires a synchronization mechanism. One problem with this is that if each thread keeps an index into the array from 0 to 19 and a client disconnects in the middle, the count changes and eventually you would hit a dead socket, depending on your array implementation. Have any of you had this problem before, and what did you do about it?
I'm coding a system in C++ with the STL, where I have a list container that lets me remove clients from the middle of the list when they disconnect. If I used iterators, each thread's iterator could be invalidated as soon as the list is modified. To solve this I made the list into a circular list where I do the following:

1. Lock the list mutex.
2. Pop the first client from the start of the list.
3. Unlock the list mutex.
4. Process this client's request.
5. Lock the list mutex.
6. Push the client at the end of the list.
7. Unlock the list mutex.

If the client disconnects, steps 5 to 7 are replaced with a "free client resources" operation instead.

Your solution sounds okay; you might want to put the queue system in a separate module so the threads can just call something like getProcessingSocket() and get the right SOCKET.
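
For instance something like this (just a sketch; getProcessingSocket and returnSocket are only suggested names):

#include <winsock2.h>
#include <list>
#include <mutex>

class ClientQueue {
    std::list<SOCKET> clients;
    std::mutex mutex;
public:
    // Steps 1-3 from the list above, hidden behind one call.
    bool getProcessingSocket(SOCKET& out) {
        std::lock_guard<std::mutex> lock(mutex);
        if (clients.empty()) return false;
        out = clients.front();
        clients.pop_front();
        return true;
    }
    // Steps 5-7: hand the socket back after processing.
    void returnSocket(SOCKET s) {
        std::lock_guard<std::mutex> lock(mutex);
        clients.push_back(s);
    }
};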

Thomas
Posted on 2003-07-20 12:24:42 by Thomas

In blocking mode, a send call should either send every byte you requested or fail.

I don't think you're right. This contradicts http://www.ecst.csuchico.edu/~beej/guide/net/html/advanced.html#sendall


Only on non-blocking sockets might fewer bytes have been sent. Keep sending until you have sent everything; on WOULDBLOCK, wait for the next FD_READ/FD_WRITE and continue.
Receiving is the same for blocking and non-blocking, except that the blocking version of course blocks. There's no guarantee about the amount of bytes received.

Just to be clear, I assume the FD_READ/FD_WRITE events come from a call like WSAAsyncSelect, but I can use normal select() instead of the WSA version and get the same answer about whether the socket is readable or writable (because of portability issues).


Your solution sounds okay; you might want to put the queue system in a separate module so the threads can just call something like getProcessingSocket() and get the right SOCKET.

Yes, it's okay in the sense that it works, but it's not perfect. In the future I'd like to make a small stats program which lists all connected users. If I took a snapshot of the client list at a given time, I would miss those clients that are currently being processed. One way to solve that is to save the clients being processed on a second, temporary list, so the two lists combined give a complete view. And because of the circular list the clients would constantly change order, so I need some sorting on the client side (though I think I would do that anyway) to make sure they don't end up in a random order all the time.

// CyberHeg
Posted on 2003-07-20 12:50:25 by CyberHeg
Originally posted by CyberHeg
I don't think you're right. This contradicts http://www.ecst.csuchico.edu/~beej/guide/net/html/advanced.html#sendall

Well, I have to admit that I've doubted it as well, but I haven't found anything in the documentation that contradicts my view yet. Maybe it's slightly different on UNIX? In the PSDK, the documentation says:

On nonblocking stream oriented sockets, the number of bytes written can be between 1 and the requested length, depending on buffer availability on both client and server computers.

Nothing like this is mentioned for blocking sockets... I'll do some tests with very large buffers..

Just to be clear, I assume the FD_READ/FD_WRITE events come from a call like WSAAsyncSelect, but I can use normal select() instead of the WSA version and get the same answer about whether the socket is readable or writable (because of portability issues).

Yes that's right..

Yes, it's okay in the sense that it works, but it's not perfect. In the future I'd like to make a small stats program which lists all connected users. If I took a snapshot of the client list at a given time, I would miss those clients that are currently being processed. One way to solve that is to save the clients being processed on a second, temporary list, so the two lists combined give a complete view. And because of the circular list the clients would constantly change order, so I need some sorting on the client side (though I think I would do that anyway) to make sure they don't end up in a random order all the time.

You could add a 'being processed' status to each item instead of removing and adding them again..
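
Roughly like this (just a sketch, all names made up); the clients never leave the list, so a stats snapshot sees every connection:

#include <winsock2.h>
#include <list>
#include <mutex>

struct Client {
    SOCKET sock;
    bool beingProcessed;             // set while a worker owns this client
};

std::list<Client> g_clients;
std::mutex g_listMutex;

Client* acquireNextClient()
{
    std::lock_guard<std::mutex> lock(g_listMutex);
    for (std::list<Client>::iterator it = g_clients.begin();
         it != g_clients.end(); ++it) {
        if (!it->beingProcessed) {
            it->beingProcessed = true;   // claim it; the element is not
            return &*it;                 // removed, so the pointer stays valid
        }
    }
    return 0;                            // every client is busy
}

void releaseClient(Client* c)
{
    std::lock_guard<std::mutex> lock(g_listMutex);
    c->beingProcessed = false;
}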

Thomas
Posted on 2003-07-20 13:02:22 by Thomas
A bit late, but I just did the test.. Even sending 40 MB in one send, with an upload speed of 16 KB/s over the internet, still succeeds with the full 40 MB as the return value.. Even larger buffers cause a WSAENOBUFS error, but never succeed with a lower return value, even though they could have..
It's still no proof, but it surely says something..
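
The test was roughly of this form (a sketch, not the exact code):

#include <winsock2.h>
#include <cstdio>
#include <cstring>

void bigSendTest(SOCKET s)        // s: a connected, blocking socket
{
    const int size = 40 * 1024 * 1024;   // 40 MB in a single send()
    char* buf = new char[size];
    memset(buf, 'x', size);

    int n = send(s, buf, size, 0);
    if (n == SOCKET_ERROR)
        printf("send failed: %d\n", WSAGetLastError());  // e.g. WSAENOBUFS
    else
        printf("send returned %d of %d bytes\n", n, size);

    delete[] buf;
}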

Thomas
Posted on 2003-07-29 09:09:58 by Thomas
Thank you for taking the time to make this test.

A bit late, but I just did the test.. Even sending 40 MB in one send, with an upload speed of 16 KB/s over the internet, still succeeds with the full 40 MB as the return value..

So you're saying that it will return instantly and say all 40 MB has been sent (or rather queued). I wonder what would happen on the next send. If you sent 40 MB at once and then right after sent another 40 MB, the second 40 MB would definitely not be sent right away, because it takes several minutes for the first part to complete.


Even larger buffers cause a WSAENOBUFS error, but never succeed with a lower return value, even though they could have..
It's still no proof, but it surely says something..
Thomas

By higher buffers, you mean that you sent more than 40 MB of data at once? I got scared for a minute, since I thought it might be an error value I had missed checking for in my code. I'm a bit unsure whether I should check for this error too, because right now this error code would make my program kill the connection.

I tried making some tests on a LAN where I would force my connection to block, or rather return the WOULDBLOCK error code, but it's hard since a LAN is usually fast and I don't have any traffic.

It's no proof and I don't disagree with you. However, we both know that socket implementations can differ between OSes, so I could imagine other platforms where it behaves differently. This is just a guess, though.

// CyberHeg
Posted on 2003-07-30 02:40:21 by CyberHeg

Thank you for taking the time to make this test.
So you're saying that it will return instantly and say all 40 MB has been sent (or rather queued). I wonder what would happen on the next send. If you sent 40 MB at once and then right after sent another 40 MB, the second 40 MB would definitely not be sent right away, because it takes several minutes for the first part to complete.

Sometimes the first send blocks (when I test using localhost), but in a test over the internet, the first 40 MB was queued by winsock and send returned immediately. The second send then blocked.

By higher buffers, you mean that you sent more than 40 MB of data at once? I got scared for a minute, since I thought it might be an error value I had missed checking for in my code. I'm a bit unsure whether I should check for this error too, because right now this error code would make my program kill the connection.

Yes, that's what I meant; send couldn't even handle a 200 MB buffer.. Stupid send :grin:

It's no proof and I don't disagree with you. However, we both know that socket implementations can differ between OSes, so I could imagine other platforms where it behaves differently. This is just a guess, though.

I think it's different on UNIX.. But from the Windows docs, I read it the way the test showed. Still, the documentation is quite vague..

Thomas
Posted on 2003-07-30 04:02:03 by Thomas