In Tutor 5 - Opcodes of the Win32Asm Tutorial, I have some trouble understanding the "little endian" format.
Could you explain what actually happens in memory and in the registers in the following two examples, which I created to try to understand little endian better :D ?

mov dword ptr [0000003Ah], 725E7A25h
add dword ptr [0000003Ah], 1
mov eax, dword ptr [0000003Ah]


mov dword ptr [0000003Ah], 725E7A25h
inc dword ptr [0000003Ah]
mov eax, dword ptr [0000003Ah]

Posted on 2011-11-05 02:38:50 by bolzano_1989
Endianness just states how the bytes are stored within a larger data type, such as a word (two bytes) or a double word (four bytes); these values are stored and read byte by byte.

If you were to write a "double word" (4 bytes / 32 bits) in binary, the leftmost bit is the "most significant" bit and the rightmost bit is the "least significant" bit. In terms of bytes, the leftmost byte is the most significant.

Big endian / network byte order stores the most significant byte first, whereas little endian stores the least significant byte first. If you store a 32-bit value as big endian and then read it back as little endian, what was originally the first byte becomes the last byte.

If you read and write in the same format (i.e. on the same machine), you don't really need to worry, but when you are exchanging data between two different platforms you need to convert between the different byte orders. Across the Internet this is standardised as big endian, aka network byte order.

Little endian machines store the least significant byte first, so it looks as though the data is stored backwards.
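
To tie this back to the snippets in the question, here is a minimal sketch in C (C rather than assembly purely so the bytes are easy to print; the variable names are made up) of what ends up in memory and in EAX on a little endian x86 machine:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* stands in for "dword ptr [0000003Ah]" from the question */
    uint32_t value = 0x725E7A25;
    uint8_t *p = (uint8_t *)&value;   /* view the dword byte by byte */

    /* on a little endian x86 machine this prints: 25 7A 5E 72 */
    printf("bytes in memory: %02X %02X %02X %02X\n", p[0], p[1], p[2], p[3]);

    value += 1;   /* same effect as "add dword ptr [...], 1" or "inc dword ptr [...]" */

    /* the whole dword was incremented, so only the lowest byte changes: 26 7A 5E 72 */
    printf("bytes after inc: %02X %02X %02X %02X\n", p[0], p[1], p[2], p[3]);

    /* "mov eax, dword ptr [...]" reads the bytes back in the same order */
    printf("value read back: %08X\n", (unsigned)value);
    return 0;
}

The key point is that ADD and INC operate on the dword as a whole: the CPU reassembles the four bytes in little endian order, increments the value, and writes the bytes back in the same order, so you still read 725E7A26h into EAX.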

Posted on 2011-11-05 10:02:39 by lukus001

> Across the Internet this is standardised as big endian, aka network byte order.

For the IP protocol itself, yes... but many protocols on top of that are developed on/for x86, and as such are little endian, to confuse matters even more.
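
As an illustration of that conversion (a sketch only, not tied to any particular protocol; swap32 is a made-up helper, equivalent to what htonl() does on a little endian machine):

#include <stdio.h>
#include <stdint.h>

/* portable byte swap for a 32-bit value */
static uint32_t swap32(uint32_t x)
{
    return (x >> 24) | ((x >> 8) & 0x0000FF00u)
         | ((x << 8) & 0x00FF0000u) | (x << 24);
}

int main(void)
{
    uint32_t host = 0x725E7A25;    /* the value as the x86 sees it */
    uint32_t big  = swap32(host);  /* big endian / network byte order */

    /* a big endian protocol wants the bytes 72 5E 7A 25 on the wire,
       which is exactly what storing the swapped value gives on x86 */
    uint8_t *p = (uint8_t *)&big;
    printf("on the wire: %02X %02X %02X %02X\n", p[0], p[1], p[2], p[3]);
    return 0;
}

A big endian protocol expects 72 5E 7A 25 on the wire; a little endian protocol expects 25 7A 5E 72, which is what an x86 machine gets by writing the dword out untouched.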
Posted on 2011-11-05 17:18:54 by Scali


> > Across the Internet this is standardised as big endian, aka network byte order.
>
> For the IP protocol itself, yes... but many protocols on top of that are developed on/for x86, and as such are little endian, to confuse matters even more.

There's also bit ordering too ;)
Posted on 2011-11-06 05:53:18 by lukus001



> > > Across the Internet this is standardised as big endian, aka network byte order.
> >
> > For the IP protocol itself, yes... but many protocols on top of that are developed on/for x86, and as such are little endian, to confuse matters even more.
>
> There's also bit ordering too ;)


Oh yeah... tell me about it :P
I had developed a bitstream reader for JPG/MPG style streams (the 'normal' way to store bits, as far as I'm concerned).
When I wrote a GIF decoder, I just grabbed the JPG/MPG reader, and it just didn't work. Then when I read the GIF specs more closely, I noticed they packed the bits into bytes in 'reverse' order compared to JPG/MPG.
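
For anyone wondering what that 'reverse' packing looks like, here is a rough sketch (made-up code, not Scali's actual reader): JPG/MPG style streams hand out bits starting from the most significant bit of each byte, while GIF packs its LZW codes starting from the least significant bit:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* read one bit, MSB first within each byte (JPG/MPG style) */
static int get_bit_msb(const uint8_t *buf, size_t bitpos)
{
    return (buf[bitpos / 8] >> (7 - (bitpos % 8))) & 1;
}

/* read one bit, LSB first within each byte (GIF/LZW style) */
static int get_bit_lsb(const uint8_t *buf, size_t bitpos)
{
    return (buf[bitpos / 8] >> (bitpos % 8)) & 1;
}

int main(void)
{
    const uint8_t stream[] = { 0xB4 };   /* 1011 0100 in binary */

    /* MSB first yields 1 0 1 1 0 1 0 0, LSB first yields 0 0 1 0 1 1 0 1 */
    for (size_t i = 0; i < 8; i++) printf("%d", get_bit_msb(stream, i));
    printf("  <- MSB first (JPG/MPG style)\n");
    for (size_t i = 0; i < 8; i++) printf("%d", get_bit_lsb(stream, i));
    printf("  <- LSB first (GIF style)\n");
    return 0;
}

Feeding the same byte (0B4h) through both readers gives two different bit sequences, which is exactly why a reader written for one format falls over on the other.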
Posted on 2011-11-06 11:41:57 by Scali