hi everybody

i have a problem with the C3 bit, the ZF flag and the JZ jump

i hope that someone can help me out with this problem

the following code does not set the FPU C3 bit, or it is not copied correctly to the CPU flags register?

```
finit         ; initialize the FPU
fld fRotation ; starts at 0.0, incremented by 0.01 each frame
fcomp f180    ; compare with 1.57 (90 degrees in radians) and pop
fnstsw ax     ; store the FPU status word in AX
sahf          ; copy AH into the CPU flags (C0->CF, C2->PF, C3->ZF)
jz @NewFile@  ; jump if zero (ZF = 1)
```

it does not jump when fRotation and f180 are equal

using the fcomi instruction instead gives the same result

jpam

Your problem probably has to do with accumulation-of-error.

Floating point operations suffer from rounding. Therefore the result of a floating point operation is rarely *exactly* the result you expect, unlike integers.

Instead of doing a direct compare, you should generally use a small margin of error, commonly referred to as 'epsilon'.

Eg, instead of doing this:

if (x == 90.0)

You should do something like this:

if (abs(x - 90.0) < epsilon)

Where 'epsilon' is a very small number, depending on the expected precision. Eg: epsilon == 0.0001.

thanks scali

but when i load a float of 0.0

and i increase that float every frame with fadd 0.01

how does that look internally in the fpu register?

i know the fpu converts the float to a real10 for calculations

but a float of 0.01 should be the same in real4 and real10?

so my logical thinking was: when the float reaches 1.57, the fcomp instruction should set the C3 bit

i rewrote my code with a small margin, and it is working ok now

It's a bit difficult to explain how it looks.

But in short, a floating point number is composed something like this:

sign * (mantissa * radix^exponent)

Sign can be 1 or -1.

The mantissa holds the significant digits, and the radix is the base of the number system you use (eg. decimal has radix 10, hexadecimal has radix 16, binary has radix 2, etc). The mantissa is always normalized, so in this formulation it lies in the 0...1 range.

The exponent is, well, the exponent. It is used as a scale factor, basically to move the decimal point of the mantissa forward or backward (hence the name 'floating point').

The radix is fixed, and in IEEE754-standardized floating point (such as x87), the radix is 2.

Only the mantissa and exponent are stored.

So for example, if you had a number such as -12.345, it would be stored in decimal as such:

radix = 10

sign = -1

mantissa = 0.12345

exponent = 2

So, take the above formula and you get:

(-1 * 0.12345) * 10^2 = -0.12345 * 100 = -12.345

The problem here lies in the fact that you only have limited precision for each component.

For single precision IEEE754 floats (32-bit), you get 1 sign bit, 23 mantissa bits and 8 exponent bits.

For double precision IEEE754 floats (64-bit), you get 1 sign bit, 52 mantissa bits and 11 exponent bits.

As a result, the mantissa needs to be rounded after it is normalized, which cuts off some of the least significant digits at the end. This is where errors will occur.

When you specify 0.01 as a decimal number, it will not be stored *exactly* as 0.01 in a binary float. It may actually be something like 0.0099999999 (decimal doesn't map 'nicely' to binary). So every time you add it, you 'lose' a little. That is why the comparison with an epsilon value is required: you factor in that you have a small error.

It is generally not advisable to repeatedly add floating point numbers anyway.

Instead, it is better to use an integer counter, and multiply by a float:

floatX = intCounter * floatStep;

This way you avoid the accumulation-of-error. The int counter is always exact, and you only get rounding once, after the multiply. You don't stack error upon error of previous iterations.

That is the short version, in reality there is some minor trickery applied when storing the mantissa (one bit is 'implied').

For more detailed information, I will direct you to: http://en.wikipedia.org/wiki/Floating_point

wow thanks for the detailed information about the fpu, scali!

never knew that the internal workings of the fpu were that complicated

now i see why my zero jump wasn't working :)
