"When in doubt, RTFM".
LES does exactly what the Intel manual says it does: it loads a far pointer (a 16:16 pointer into es:bx in your case). A segment register is hardly useful in this context.
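For illustration, something like this (a sketch; the variable name fptr is made up, assuming a far pointer stored as offset:segment):

    fptr    dd  ?           ; 16:16 far pointer: low word = offset, high word = segment

            les  bx, fptr   ; bx <- offset word, es <- segment word
            mov  al, es:[bx]; es:bx now addresses the pointed-to byte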
Are you using a DOS extender? Which one? Most of them use the flat model, not small.
You're calling the int 21h/3Fh DOS service with ds:edx pointing to a single-dword read buffer, buf (crc32tab immediately follows it), yet passing 0FFFFFFFFh - offset buf (not much less than 4 GiB) as its size. Comments about dynamic storage will not allocate it automagically. Even if the DOS service successfully reads from that file, it will overwrite crc32tab and, possibly, your code.
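For reference, this is roughly how a read with a real, fixed-size buffer looks in a small-model program (MASM-style sketch, untested; BUFSIZE, readbuf and handle are made-up names, and the usual .model/.data/.code skeleton is assumed around it):

    BUFSIZE equ 8192
    readbuf db  BUFSIZE dup(?)      ; fixed-size buffer that is actually reserved
    handle  dw  ?                   ; file handle returned by int 21h/3Dh (open)

    read_block:
            mov  ah, 3Fh            ; DOS: read from file handle
            mov  bx, handle
            mov  cx, BUFSIZE        ; request size goes in cx, not 0FFFFFFFFh
            mov  dx, offset readbuf ; ds:dx -> buffer
            int  21h                ; CF set on error (ax = error code),
            ret                     ; otherwise ax = bytes actually read, 0 = EOF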
What exactly are you trying to do? For example: write a real-mode program that uses some 32-bit registers; write a 32-bit flat-model program for some DOS extender; etc.
I want a checksum program like the code I posted that will work on larger file sizes.
If it can be done without 32-bit registers, that would be fine too.
Thanks.
Posted on 2009-10-26 13:21:21 by skywalker
CRC-32 calculation can even be done bitwise; it has no restrictions of that kind. A 16-bit calculation is no more complex than a 32-bit one.
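For instance, a minimal bit-at-a-time update routine for one byte, using only 16-bit registers (MASM-style sketch, untested; 0EDB88320h is the standard reflected polynomial, and the usual init to 0FFFFFFFFh plus final complement are left to the caller):

    ; in:  dx:ax = running CRC, bl = data byte
    ; out: dx:ax = updated CRC (cx is clobbered)
    crc32_byte:
            xor  al, bl          ; crc ^= byte (into the low 8 bits)
            mov  cx, 8           ; process 8 bits
    nextbit:
            shr  dx, 1           ; 32-bit right shift of dx:ax by one;
            rcr  ax, 1           ; CF ends up holding the bit shifted out
            jnc  skipxor
            xor  ax, 8320h       ; crc ^= 0EDB88320h, low word
            xor  dx, 0EDB8h      ; and high word
    skipxor:
            loop nextbit
            ret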
OK, let's outline a real-address-mode 16-bit DOS program for CRC-32 calculation.
1. Parse the command line.
2. Open the next file; exit the program if none are left.
3. Set the CRC-32 to 0.
4. Read the next block from the file.
5. On end-of-file, output the CRC-32, close the file and go to 2.
6. Update the CRC-32 with the data from the block just read and go to 4.
Which step is difficult for you? (Steps 4-6 are sketched as a loop below.)
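A rough sketch of the steps 4-6 loop, reusing the placeholder names (BUFSIZE, readbuf, handle) and the crc32_byte routine from the earlier sketches, with the running value kept in a crc dword (untested):

    crc     dd  ?                   ; running CRC-32 for the current file

    read_loop:
            mov  ah, 3Fh            ; step 4: read the next block
            mov  bx, handle
            mov  cx, BUFSIZE
            mov  dx, offset readbuf
            int  21h
            jc   read_error
            test ax, ax             ; step 5: 0 bytes read = end of file
            jz   file_done
            mov  cx, ax             ; number of bytes to process
            mov  si, offset readbuf
            mov  ax, word ptr crc   ; step 6: update the CRC over the block;
            mov  dx, word ptr crc+2 ; the running value lives in dx:ax here
    update_byte:
            mov  bl, [si]
            inc  si
            push cx
            call crc32_byte         ; the bit-at-a-time routine above
            pop  cx
            loop update_byte
            mov  word ptr crc, ax
            mov  word ptr crc+2, dx
            jmp  read_loop
    file_done:                      ; placeholder: output crc, close file, go to step 2
    read_error:                     ; placeholder: report the DOS error in ax
            ret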
Why are we doing CRC-32 in a 16-bit environment?
Why not? CRC32 just has a higher 'uniqueness factor' than CRC16.
It's not like 16-bit environments only allow you to do 16-bit processing :)
I'd ask why we are doing anything at all in a 16-bit environment. Is there any particular reason for that?
Why does a dog lick its cojones? Because it can! ;-)
Seriously, I dunno why skywalker is trying to write this program. It's not a big challenge, but it can be educational (extend it to different polynomials/endianness/MDx/SHA-x/whatever).
Last time I wrote 16-bit x86 code was when I worked on a histogram utility for a Canon digital camera with a friend of mine.
Canon uses 80186-compatible embedded processors, running ROMDOS. Couldn't use 32-bit if I wanted to.
Nothing stops you from using 32-bit code in a 16-bit code segment.
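On a 386 or later the assembler just emits operand-size prefixes; a MASM-style sketch, assuming plain real-mode DOS:

            .model small
            .386                    ; .386 after .model keeps the segments use16
            .code
            mov  eax, 0EDB88320h    ; 32-bit immediate, via a 66h operand-size prefix
            xor  edx, edx
            add  eax, edx           ; full 32-bit arithmetic in a 16-bit segment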
Uhhh, an 80186 does :)
It's a 16-bit processor.
My fault. I was thinking of mixing 16- and 32-bit code in DOS programs on a 386+.