On 2012-01-13 at 08:05:00, Mathieu Bouchard wrote:
Neither binary nor decimal representation is more precise than the other; it's just that their precisions never match exactly. 20 bits is slightly less than 6 decimals, whereas 16 decimals is slightly more than 53 bits. Thus you can only preserve precision by using slightly too many digits; otherwise you'll have too few.
Erratum: I meant that 20 bits is slightly more than 6 decimals.
20 bits = 6.0205... decimals
24 bits = 7.2247... decimals
53 bits = 15.9545... decimals
(And by «decimal» here, I mean a significant digit, not a digit after the decimal point.)
| Mathieu BOUCHARD ----- telephone: +1.514.383.3801 ----- Montréal, QC