never trust Pd.
On 08.03.2012 at 16:23, Lorenzo Sutton wrote:
Or, beware of trying to compare floats with [==] ...
Lorenzo. <funky_floats.pd>
On Thu, 2012-03-08 at 16:23 +0100, Lorenzo Sutton wrote:
Or, beware of trying to compare floats with [==] ...
Lorenzo.
That's a good example of the implications inherent in floats. What you call a work-around is actually the correct solution. When counting, make sure you count with something that can be precisely represented by floats, otherwise the error will grow with each iteration. Integers up to 1.6*10^7 meet that criterion.
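For illustration, the same idea as a tiny C program (a sketch only; Pd's t_float is a single-precision C float by default, which is why plain float is used here):

#include <stdio.h>

int main(void)
{
    /* counting by 0.1: the increment is not exactly representable,
       so the error grows with every iteration */
    float drifting = 0.0f;
    for (int i = 0; i < 10; i++)
        drifting += 0.1f;
    printf("ten additions of 0.1f -> %.9f (== 1.0f? %s)\n",
           drifting, drifting == 1.0f ? "yes" : "no");

    /* counting by 1 and dividing afterwards: every intermediate value
       is an integer below 2^24 and therefore exact */
    float counter = 0.0f;
    for (int i = 0; i < 10; i++)
        counter += 1.0f;
    float scaled = counter / 10.0f;
    printf("count to 10, then /10 -> %.9f (== 1.0f? %s)\n",
           scaled, scaled == 1.0f ? "yes" : "no");
    return 0;
}

The first loop prints 1.000000119 and fails the equality test; the second stays exact.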
Roman
----- Original Message -----
From: Roman Haefeli reduzent@gmail.com
To: pd-list@iem.at
Sent: Thursday, March 8, 2012 1:52 PM
Subject: Re: [PD] Some more float weirdness/fun
Is this still an issue when float precision is 64-bit?
-Jonathan
On 08.03.2012 20:47, Jonathan Wilkes wrote:
Is this still an issue when float precision is 64-bit?
The issue will just arise later, because you have twice as many bits for representing your value, but the problem still exists.
As Pd is a programming language, these are good reads on the issue: http://en.wikipedia.org/wiki/Floating_point#IEEE_754:_floating_point_in_mode... http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
or to make the wording quotable: https://twitter.com/#!/tomscott/status/174143430170120192
Best regards, Thomas
On 2012-03-08 at 11:47:00, Jonathan Wilkes wrote:
Is this still an issue when float precision is 64-bit?
in float32 you have 24 significant bits. in float64 you have 53 significant bits.
This means that the limit is pushed back from 16777216 to 9007199254740992 instead.
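A quick C check of those two limits (just a sketch; the constants below are 2^24 and 2^53 written out):

#include <stdio.h>

int main(void)
{
    /* float32: 24 significant bits, so integers are exact up to 2^24 */
    float f = 16777216.0f;          /* 2^24 */
    float f_plus_1 = f + 1.0f;      /* rounds back to 2^24: the 1 is lost */
    printf("float:  2^24 + 1 == 2^24 ?  %s\n", f_plus_1 == f ? "yes" : "no");

    /* float64: 53 significant bits, exact up to 2^53 */
    double d = 9007199254740992.0;  /* 2^53 */
    double d_plus_1 = d + 1.0;      /* only from here on does the 1 get lost */
    double below = (d - 1.0) + 1.0; /* still exact just below 2^53 */
    printf("double: 2^53 + 1 == 2^53 ?        %s\n", d_plus_1 == d ? "yes" : "no");
    printf("double: (2^53 - 1) + 1 == 2^53 ?  %s\n", below == d ? "yes" : "no");
    return 0;
}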
| Mathieu BOUCHARD ----- téléphone : +1.514.383.3801 ----- Montréal, QC
On Thu, 2012-03-08 at 18:03 -0500, Mathieu Bouchard wrote:
in float32 you have 24 significant bits. in float64 you have 53 significant bits.
This means that the limit is pushed back from 16777216 to 9007199254740992 instead.
But 0.1 still cannot be represented exactly by float64, can it?
Roman
On 2012-03-09 at 08:32:00, Roman Haefeli wrote:
But 0.1 still cannot be represented exactly by float64, can it?
It can't. It also doesn't work for any other form of binary floating point. It's just that float64 is a lot closer to exact than float32 can be, and so on.
0.1 = 1/10 = 1/(2*5) in prime factors.
This means both 2 and 5 need to be present as prime factors in the base of the format, to have an exact fraction for it. So, decimal floats obviously can, and the only other bases that allow it are multiples of 10.
for 1/44100 = 1/(2*2*3*3*5*5*7*7), the smallest base to do it exactly is 2*3*5*7 = 210.
for 1/48000 = 1/(2*2*2*2*2*2*2*3*5*5*5), the smallest base to do it exactly is 2*3*5 = 30.
I'm just saying that as examples of the principle for exact fractions ; in practice, bases that aren't binary nor decimal are rarely ever used, and decimal floats are almost only used as textfile versions of binary floats (such as in the pd file format and most programming languages).
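A small C sketch that prints what these fractions become once rounded to binary floats (asking for 17 digits makes the rounding visible):

#include <stdio.h>

int main(void)
{
    /* 0.1 has no exact binary representation in either precision */
    printf("0.1 as float32: %.17g\n", (double)0.1f);  /* 0.10000000149011612 */
    printf("0.1 as float64: %.17g\n", 0.1);           /* 0.10000000000000001 */

    /* the same holds for 1/44100 and 1/48000 */
    printf("1/44100 as float64: %.17g\n", 1.0 / 44100.0);
    printf("1/48000 as float64: %.17g\n", 1.0 / 48000.0);
    return 0;
}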
| Mathieu BOUCHARD ----- téléphone : +1.514.383.3801 ----- Montréal, QC
On 2012-03-09 02:32, Roman Haefeli wrote:
But 0.1 still cannot be represented exactly by float64, can it?
For any floatX unless X is infinity the number of floats that are not exactly represented is always infinite.
Martin
But 0.1 still cannot be represented exactly by float64, can it?
For any floatX unless X is infinity the number of floats that are not exactly represented is always infinite.
Martin
There is a countably infinite number of rational numbers and an uncountably infinite number of irrational numbers that cannot be represented.
We could also debate over whether infinity is exactly represented. When some math operation overflows (exceeds the range of floats), the result assigned is inf.
That's not the definition of infinity either: Take the set of real numbers R and the ordering operation <, then add an additional point "infinity" such that for any x belonging to R, x < infinity.
So, the inf in the float definition only represents "infinity" defined relative to the finitely countable set of numbers that can be represented as floats, not the actual infinity as represented in your head :)
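The overflow behaviour is easy to poke at from C (a sketch; FLT_MAX is the largest finite float32):

#include <stdio.h>
#include <math.h>
#include <float.h>

int main(void)
{
    /* exceeding the float range produces the special value inf */
    float overflowed = FLT_MAX * 2.0f;
    printf("FLT_MAX * 2 = %g, isinf: %d\n", overflowed, isinf(overflowed));

    /* inf compares as larger than every finite float */
    printf("inf > FLT_MAX ? %d\n", overflowed > FLT_MAX);

    /* but inf - inf is NaN, which is not even equal to itself */
    float n = overflowed - overflowed;
    printf("inf - inf = %g, NaN == NaN ? %d\n", n, n == n);
    return 0;
}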
On 2012-03-09 at 09:39:00, Charles Henry wrote:
Martin wrote:
For any floatX unless X is infinity the number of floats that are not exactly represented is always infinite.
For a floatX format where X is the number of bits, every float is exact and there are at most pow(2,X) floats.
You mean that there are an infinity of numbers that round to a finite number of floats.
There is a countably infinite number of rational numbers and an uncountably infinite number of irrational numbers that cannot be represented.
From a constructivist point of view, there is only a countably infinite number of irrationals that can be represented at all, by any means. For a certain ontology useful to constructivism, it can be said that the uncountably many irrationals that are inexpressible also don't exist.
This leaves you with countably many rational numbers and countably many irrationals that can't be represented in a finite format.
We could also debate over whether infinity is exactly represented. When some math operation overflows (exceeds the range of floats), the result assigned is inf.
Every float represents a range of numbers. The difference with infinities is that they represent half-intervals, that is, a line bounded only on one side.
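In C terms the width of that range can be read off with nextafterf; a small sketch (the printed gaps are the spacing between neighbouring float32 values):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* each finite float covers a small interval of real numbers:
       everything nearer to it than to its neighbours rounds to it */
    float x = 0.999f;
    float up = nextafterf(x, INFINITY);
    printf("0.999f           = %.9g\n", x);
    printf("next float above = %.9g\n", up);
    printf("gap              = %g\n", up - x);   /* about 6e-8 here */

    /* the gap grows with magnitude: just above 2^24 it is already 2 */
    float big = 16777216.0f;
    printf("gap above 2^24   = %g\n", nextafterf(big, INFINITY) - big);
    return 0;
}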
That's not the definition of infinity either: Take the set of real numbers R and the ordering operation <, then add an additional point "infinity" such that for any x belonging to R, x < infinity.
You should know that there are several competing definitions of infinity for real numbers (not considering other number systems in which this definition doesn't work).
There are three definitions of Real numbers (R) in common use : one without any infinite number, one with two infinite numbers as endpoints, and one with a single infinite number without a sign. There are different motivations for the use of each of those three sets. There's no definition that fits all purposes, though the one without infinite numbers at all is considered generally «cleaner» in the field of pure math.
So, the inf in the float definition only represents "infinity" defined relative to the finitely countable set of numbers that can be represented as floats
Yes, except NaN.
You'll also find out that certain definitions of infinity that apply to the whole set of Reals are also relative to just that set, and don't work as-is for all possible extensions of Reals ; for example, Complex numbers don't have a single coherent definition of less-than and greater-than anymore, because all you can do is extract features of Complex numbers and compare those features as Reals... thus you need more specific definitions (and there are more possibilities of them).
not the actual infinity as represented in your head :)
How do you know what's in people's heads ?
| Mathieu BOUCHARD ----- téléphone : +1.514.383.3801 ----- Montréal, QC
It's well-known that floats can't be treated the same way as integers... but since PD is aimed at non-engineers and non-scientists, I think it would be a good idea to implement the "good" comparison algorithms (i.e. checking against a threshold, etc.) inside [==] and so on, just to make patching easier. Maybe it's already supposed to behave this way...
As for the loss of integer precision issue, an object that detects "integer overflow" (that is, when all integer digits of the number cannot be represented) could be created, taking into account the floating point precision (32-bit, 64-bit, ...) and so on.
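The test such an object would perform is only a line or two. A hypothetical C sketch (the function name is made up here, and it assumes Pd's usual 32-bit t_float):

#include <math.h>
#include <stdbool.h>
#include <stdio.h>

/* hypothetical test for the "integer overflow" idea: past 2^24 the gap
   between consecutive float32 values exceeds 1, so x + 1 rounds back to x
   and a counter silently stalls */
static bool loses_integer_precision(float x)
{
    float ax = fabsf(x);
    float bumped = ax + 1.0f;   /* the assignment forces rounding to float */
    return bumped == ax;
}

int main(void)
{
    printf("%d\n", loses_integer_precision(1000000.0f));   /* 0: still exact */
    printf("%d\n", loses_integer_precision(20000000.0f));  /* 1: past 2^24 */
    return 0;
}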
From: Quim Llimona lemonzi42@gmail.com
To: pd-list pd-list@iem.at
Sent: Friday, March 9, 2012 1:18 PM
Subject: Re: [PD] Some more float weirdness/fun
It's well-known that floats can't be treated the same way as integers... but since PD is aimed at non-engineers and non-scientists, I think it would be a good idea to implement the "good" comparison algorithms (i.e. checking against a threshold, etc.) inside [==] and so on, just to make patching easier. Maybe it's already supposed to behave this way...
How often do these problems happen for people? Or maybe there is a better example than [==], which, let's face it, isn't going to be used anyway when you're checking against a threshold value.
-Jonathan
On 2012-03-09 at 19:18:00, Quim Llimona wrote:
It's well-known that floats can't be treated the same way as integers... but since PD is aimed at non-engineers and non-scientists, I think it would be a good idea to implement the "good" comparison algorithms (i.e. checking against a threshold, etc.) inside [==] and so on, just to make patching easier.
Do you think it makes any sense to change the definition of [==] given the extent to which Pd has been used already ?
And then, [==] couldn't just try to be smart and pick a threshold for you. You need to explicitly tell which threshold you want it to use. Any kind of automatic threshold will end up being an annoyance, a nuisance or worse.
| Mathieu BOUCHARD ----- téléphone : +1.514.383.3801 ----- Montréal, QC
On 09/03/12 19:18, Quim Llimona wrote:
It's well-known that floats can't be treated the same way as integers... but since PD is aimed at non-engineers and non-scientists, I think it would be a good idea to implement the "good" comparison algorithms (i.e. checking against a threshold, etc.) inside [==] and so on, just to make patching easier. Maybe it's already supposed to behave this way...
No, no... I don't agree (and hope initiating the thread didn't suggest this idea). [==] should be what it says: exact comparison. Actually, in the patch I was making it would have been simple to put in a [>=] instead, which basically *is* a threshold... I just thought it would be nice to point out some float fun given all the discussions :)
Lorenzo.
On 3/10/12, Lorenzo Sutton lorenzofsutton@gmail.com wrote:
No, no... I don't agree (and hope initiating the thread didn't suggest this idea). [==] should be what it says: exact comparison. Actually, in the patch I was making it would have been simple to put in a [>=] instead, which basically *is* a threshold... I just thought it would be nice to point out some float fun given all the discussions :)
Lorenzo.
How about an abstraction that uses <= against 2*epsilon*|input|? That would be a reliable automatic way to ignore single-bit rounding errors. I'm just using this page as a reference: http://en.wikipedia.org/wiki/Machine_epsilon
There are two inputs, so I'd choose the larger of them and use that in calculating the threshold. The test patch below just takes a vertical slider from -1e-7 to 1e-7 and adds the value to 0.999 to be compared against. This patch needs a little improvement -- it seems to be obeying a threshold which is a little closer to 8.9e-8 (which is greater than the 32-bit machine epsilon... so it still does what is wanted with a little extra room).
#N canvas 86 491 682 612 10;
#X obj 91 39 inlet;
#X obj 137 39 inlet;
#X obj 182 39 $1;
#X obj 216 39 loadbang;
#X obj 94 140 abs;
#X obj 98 345 <=;
#X obj 139 110 abs;
#X obj 185 110 abs;
#X obj 93 74 t f f;
#X obj 138 74 t f f;
#X obj 140 212 >=;
#X obj 189 209 <;
#X obj 139 145 f;
#X obj 186 144 t b f;
#X obj 202 244 *;
#X obj 140 172 t f f;
#X obj 140 244 *;
#X obj 94 108 -;
#X obj 140 276 +;
#X obj 140 308 * 1.192e-07;
#X obj 97 391 outlet;
#X connect 0 0 8 0;
#X connect 1 0 9 0;
#X connect 2 0 17 1;
#X connect 3 0 2 0;
#X connect 4 0 5 0;
#X connect 5 0 20 0;
#X connect 6 0 12 0;
#X connect 7 0 13 0;
#X connect 8 0 17 0;
#X connect 8 1 6 0;
#X connect 9 0 17 1;
#X connect 9 1 7 0;
#X connect 10 0 16 0;
#X connect 11 0 14 0;
#X connect 12 0 15 0;
#X connect 13 0 12 0;
#X connect 13 1 10 1;
#X connect 13 1 11 1;
#X connect 13 1 14 1;
#X connect 14 0 18 1;
#X connect 15 0 10 0;
#X connect 15 1 11 0;
#X connect 15 1 16 1;
#X connect 16 0 18 0;
#X connect 17 0 4 0;
#X connect 18 0 19 0;
#X connect 19 0 5 1;
#N canvas 0 25 264 184 10;
#X obj 118 79 nearly_equal 0.999;
#X obj 38 27 vsl 15 128 -1e-07 1e-07 0 0 empty empty empty 0 -9 0 10 -262144 -1 -1 0 1;
#X floatatom 115 135 5 0 0 0 - - -;
#X obj 116 18 + 0.999;
#X connect 0 0 2 0;
#X connect 1 0 3 0;
#X connect 3 0 0 0;
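In C, the comparison the abstraction aims for boils down to something like this (a rough rendering of the idea, not a literal translation of the patch; FLT_EPSILON is the 1.192e-07 used above):

#include <math.h>
#include <float.h>
#include <stdbool.h>
#include <stdio.h>

/* roughly the test described above: a and b count as equal when they
   differ by no more than about two units in the last place of the
   larger operand */
static bool nearly_equal(float a, float b)
{
    float largest = fmaxf(fabsf(a), fabsf(b));
    return fabsf(a - b) <= 2.0f * FLT_EPSILON * largest;
}

int main(void)
{
    float sum = 0.0f;
    for (int i = 0; i < 10; i++)
        sum += 0.1f;                  /* ends up just above 1.0 */
    printf("sum == 1.0f          -> %d\n", sum == 1.0f);             /* 0 */
    printf("nearly_equal(sum, 1) -> %d\n", nearly_equal(sum, 1.0f)); /* 1 */
    return 0;
}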
On 2012-03-10 at 16:31:00, Charles Henry wrote:
How about an abstraction that uses <= against 2xepsilon*|input|? That would be a reliable automatic way to ignore single-bit rounding errors.
Don't you wish rounding errors were at most single-bit?
But as you compose more operations together, error can accumulate and/or amplify.
This is why no-one uses a definition of == with a hardcoded epsilon in it, in any programming language that I ever heard of.
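A small sketch of how far a long chain of single-precision operations can drift; the point is only that the final error is far larger than any epsilon-sized tolerance:

#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
    /* add 0.1f a million times; the exact answer would be 100000 */
    float sum = 0.0f;
    for (int i = 0; i < 1000000; i++)
        sum += 0.1f;

    float err = fabsf(sum - 100000.0f);
    float two_ulp_tolerance = 2.0f * FLT_EPSILON * 100000.0f;

    printf("sum   = %f\n", sum);
    printf("error = %f\n", err);
    /* the accumulated error dwarfs a fixed few-ulp tolerance */
    printf("error / (2*eps*100000) = %f\n", err / two_ulp_tolerance);
    return 0;
}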
| Mathieu BOUCHARD ----- téléphone : +1.514.383.3801 ----- Montréal, QC