and then changing 100000 to 0 makes the result converge to -691.2.
Actually, afaik, [variance] doesn't have a bug by itself; the bug is in [mean_n], which shows similar behaviour in default mode.
Yes, that was my thought as well, because the variance abstraction looks correct.
The bug comes from algebraic assumptions that don't hold for floats. With real numbers, a+b-a-b = 0, but with floats, a+b-a-b is only guaranteed to be a "small" number: roughly at most (abs(a)+abs(b))/2^24, or something like that. But 100000*100000 = 10000000000, and dividing that by 2^24 = 16777216 gives about 596, which is an upper bound on the error: so the error is surely between -596 and +596.
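To illustrate the point, here is a minimal Python sketch with made-up numbers, assuming single-precision (float32) arithmetic, which is where the 2^24 comes from:

```python
import numpy as np

# a is on the order of 100000*100000 = 1e10; b is a much smaller term.
a = np.float32(100000.0) * np.float32(100000.0)   # exactly 1.0e10 in float32
b = np.float32(691.2)

residue = (a + b) - a - b              # algebraically 0, but not in float32
bound = (abs(a) + abs(b)) / 2**24      # the (|a|+|b|)/2^24 bound discussed above

print(residue)   # nonzero: a few hundred, lost to rounding when a+b was formed
print(bound)     # about 596
```

The residue is nonzero because a+b has to be rounded to the nearest representable float32, whose spacing near 1e10 is about 1024; but it stays within the ~596-per-operation bound above.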
I trust your math here, but just note that your example converges to -691. But if I understand you correctly, 'filtering' the input data through [int] should make [variance] error-free (we hope).
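For the record, a minimal sketch of what that filtering could look like (hypothetical Python, not the actual [variance] or [int] words; `exact_variance` and the population-variance formula E[x^2] - E[x]^2 are my own assumptions):

```python
from fractions import Fraction

def exact_variance(xs):
    """Population variance using exact integer/rational arithmetic."""
    xs = [int(x) for x in xs]              # the proposed [int] filtering step
    n = len(xs)
    s = sum(xs)                            # exact: Python ints never round
    ss = sum(x * x for x in xs)            # exact sum of squares
    return Fraction(ss, n) - Fraction(s, n) ** 2

print(float(exact_variance([100000] * 1000)))   # 0.0, no -691-style drift
```

If the inputs really are integers, both sums are exact, so the only rounding happens when the final rational result is converted back to a float.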
thanks Oded