hi,

because of the problems I had with floating-point calculations, and following Roman's advice, I changed to integer math.


however, that's easier said than done.

I'm running into an unexpected limitation again:

32 bits can represent signed integers up to 2147483647 (about 2.147 billion).

however, as soon as a number needs more than 27 bits (above 134217727, i.e. 2^27 - 1), the last byte stays 0.

e.g. 134200000 + 25000 = 134224992 (should be 134225000).
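
to convince myself the integer math itself is fine, I tried the same sum in a small C sketch on the PC. the C code is only for illustration; my assumption (unconfirmed) is that somewhere in the toolchain the value still passes through a single-precision float, whose 24-bit mantissa would round to exactly the value I'm seeing:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int32_t a = 134200000;
    int32_t b = 25000;

    /* plain 32-bit signed addition: both operands and the sum are
       far below INT32_MAX, so this prints the exact 134225000 */
    printf("int32 sum : %ld\n", (long)(a + b));

    /* the same sum forced through single precision: a 24-bit
       mantissa spaces values 16 apart in [2^27, 2^28), so the
       result rounds to 134224992 -- the exact value I'm getting */
    float f = (float)a + (float)b;
    printf("float sum : %.0f\n", (double)f);
    return 0;
}

the match is so exact that I suspect some intermediate is still held in single precision, but I could be wrong about where.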


what am I missing?


rolf