unsigned and signed numbers have separate instructions for multiplication and division

unsigned and signed both use the same addition and subtraction instruction

^

how does the computer know whether a number is signed or not? does it matter?

also, when i try to add using 2's complement i get the incorrect value. why is this, and how does the cpu know?

Human addition with signed numbers:

     -9
  +  -4
  -----
    -13

CPU addition with signed numbers, using 2's complement to get the negative value of both numbers:

     -9
  +  -4
  -----
      3   <-- wrong value!

also, one last thing! 2's complement isn't another form of a number. for example, "intel pcs use 2's complement for their numbers" <-- this statement would be false, because they just use regular binary... 2's complement is only used when you need to show signed values!


Not to be terse, however, simply google for

**binary addition and subtraction** or

**twos complement**

You will find a wealth of information on this topic that explains it in thorough detail...

**dougfunny**,

If you'll follow **p1ranha**'s advice, you'll probably learn about other kinds of binary representations for signed numbers, like one's complement or sign-and-magnitude. You may even stumble upon balanced ternary or other useful-but-weird representations as well. ;)

-9 + (-4) = 3 because overflow happens: the true result, -13, is out of range for the operand width being used, so the bits that come out are wrong. Each architecture defines a certain range for signed operations at each operand size.

You can use the same add/subtract instruction and treat its operands as unsigned. In addition, the CPU defines some extra flag bits which are automatically set or cleared; if a programmer considers the operands to be signed numbers, he can look at those bits and interpret the result through them. On the other hand, if the programmer considers the numbers unsigned, the result may already be OK and he does not need to look at extra bits like the sign bit and the overflow bit.

Anyway, a programmer should know that there is a range for add/subtract/multiply/divide for signed and also for unsigned numbers. This range is determined by the size of the registers he is working with.


How does -9 + (-4) get to be out of range? How are we determining that the result is "wrong"? Works for me!

Best,

Frank


; 32-bit Linux NASM; assemble and link against libc, e.g.:
;   nasm -f elf32 add.asm
;   gcc -m32 -nostartfiles add.o -o add

global _start
extern printf

section .data
fmt     db `%d\n`, 0        ; backquotes let NASM expand the \n escape

section .text
_start:
        nop                 ; handy spot for a debugger breakpoint
        mov     eax, -9
        mov     ebx, -4
        add     eax, ebx    ; eax = -13; same ADD whether operands are signed or not
        push    eax         ; cdecl: push arguments right to left
        push    fmt
        call    printf
        add     esp, 4 * 2  ; caller pops the two 4-byte arguments
        mov     eax, 1      ; sys_exit
        xor     ebx, ebx    ; exit status 0 (was leftover garbage before)
        int     80h