Howdy, I've been doing some coding again after quite a long break, and I just came across a problem. I defined the following winsock struct according to the MSDN spec:


WSANETWORKEVENTS:
    record
        lNetworkEvents: dword;
        iErrorCode: dword[10];
    endrecord;

Then I used it in my program to catch network events on a socket, like this:

if( (type w.WSANETWORKEVENTS [edi]).iErrorCode[ FD_ACCEPT_BIT ] == 0 ) then

Here edi points at an allocated WSANETWORKEVENTS structure. Since the 'if' code never triggered, I examined the MASM output and noticed that HLA treated the index as a byte offset rather than a dword offset: FD_ACCEPT_BIT is 3, so HLA generated the displacement as 4+3 instead of 4+(3*4), even though iErrorCode is an array of dwords. Now, this makes no sense to me. Of course, I could just write .iErrorCode[FD_ACCEPT_BIT*4] and it will work fine, but shouldn't the parser pick up the fact that iErrorCode is an array of dwords rather than bytes?

Maybe I need to 'type' FD_ACCEPT_BIT as a dword?

Ah well, this is no biggie; I'm just wondering if it's just me being stupid as usual.

And keep up the great work, Randall!!
Posted on 2003-10-16 12:09:10 by BinarySoup

No!
Machine addresses are always byte addresses. HLA does not multiply constant offsets into run-time arrays by the size of an array element. You have to do that yourself. So, yes, the correct way to do this is to use FD_ACCEPT_BIT *4.
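
For instance, a minimal sketch of the constant-index form (assuming edi still points at the w.WSANETWORKEVENTS record and FD_ACCEPT_BIT is 3):

    // The bracket expression is a byte offset: 4 (past lNetworkEvents) plus
    // 3*4 selects the dword element for FD_ACCEPT, i.e. [edi+16].
    if( (type w.WSANETWORKEVENTS [edi]).iErrorCode[ FD_ACCEPT_BIT*4 ] == 0 ) then

        // no error recorded for the FD_ACCEPT event

    endif;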

Sure, the parser could figure this out, but then you would have two different semantics based on whether you had a constant or a register index. That would be ugly!
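
For example (again just a sketch, assuming ebx is free to use as the index register), the register form works in byte offsets too, so you do the scaling yourself:

    mov( FD_ACCEPT_BIT, ebx );   // element index (3)
    shl( 2, ebx );               // *4: convert the dword index to a byte offset
    if( (type w.WSANETWORKEVENTS [edi]).iErrorCode[ ebx ] == 0 ) then

        // same test as above, with a run-time index

    endif;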
Cheers,
Randy Hyde
Posted on 2003-10-16 21:24:22 by rhyde
Thanks for the quick and precise answer, Randall, and yes, I see what you mean.
Posted on 2003-10-17 07:14:24 by BinarySoup