(sorry for the duplicate, Martin!)
The Box-Muller method (I've always thought it was just Ross who did this one) is a classic trick. It probably goes back even to Gauss--who knows, and who cares.
pdf of Gaussian:
f(x) = k * e^-(x^2 / (2*sigma^2))
k is a normalization constant--which we will determine.
The trick is that you can't integrate f(x) directly in closed form. What we do is create a joint probability distribution g(x,y) = f(x)*f(y), where x and y are independently distributed.
Then, if we integrate g(x,y) over the whole plane, we get the square of the integral of f, and from that the square of the normalization constant.
g(x,y) = f(x)*f(y) = k * e^-(x^2 / (2*sigma^2)) * k * e^-(y^2 / (2*sigma^2)) = k^2 * e^-((x^2 + y^2) / (2*sigma^2))
And it's simple to change to polar coordinates: x^2 + y^2 = r^2, and theta won't even appear in the integrand.
integral( r=0 to inf, theta=0 to 2pi, g(r,theta) * r * dr dtheta )   <- the r*dr from the Jacobian is what makes this integrable
= integral( r=0 to inf, theta=0 to 2pi, k^2 * e^-(r^2 / (2*sigma^2)) * r * dr dtheta )
Integrate over theta, and make the substitution u = r^2 / 2, du = r*dr.
= 2*pi * k^2 * integral( u=0 to inf, e^-(u/sigma^2) du )
= -2*pi * sigma^2 * k^2 * ( e^-(u/sigma^2) evaluated from u=0 to u=inf )
= 2*pi * sigma^2 * k^2
So, for normalization: 1 = 2*pi * sigma^2 * k^2, which gives k = 1/sqrt(2*pi*sigma^2).
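If you want to convince yourself numerically, here's a throwaway check in C (sigma, the integration range and the step size are just arbitrary picks for the check, not part of the derivation):

    /* throwaway numerical check that k = 1/sqrt(2*pi*sigma^2) really
       normalizes f(x); sigma, the range and the step are arbitrary picks */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double pi = acos(-1.0);
        double sigma = 1.5;
        double k = 1.0 / sqrt(2.0 * pi * sigma * sigma);
        double sum = 0.0, dx = 1e-4;
        for (double x = -10.0 * sigma; x <= 10.0 * sigma; x += dx)
            sum += k * exp(-x * x / (2.0 * sigma * sigma)) * dx;
        printf("integral of f = %f (should be close to 1)\n", sum);
        return 0;
    }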
So, here's the method, and the reason it works.
In g(r, theta), r and theta are independent. Theta doesn't even appear in the distribution. And this means that theta is uniformly distributed on [0, 2*pi).
So, we can find the distribution of r on its own by integrating over theta (and keeping the r from the Jacobian):
g(r) = r/sigma^2 * e^-(r^2 / (2*sigma^2))
Now, r^2 is exponentially distributed on 0 to inf (with mean 2*sigma^2). What we're going to do is write down the cumulative distribution function of r^2, which takes values on [0,1]. Then, we can choose a value on [0,1] and find the corresponding value of r^2 (an inverse function!).
our cdf (cumulative dist function): G(Z) = P( r^2 < Z ) = integral( s=0 to Z, 1/(2*sigma^2) * e^-(s / (2*sigma^2)) ds )
G(Z) = 1 - e^-(Z / (2*sigma^2))
Now, you could just take sigma to be one here and put it back at the end--you'll come up with the same formula regardless. I'll keep it in and proceed with the following inversion.
Take U1 = G(Z) on [0,1], and take U2 on [0,1].
And we're going to take those values on the unit square, map them onto r^2 and theta, and then transform back to x and y, giving us two independently distributed normal variables.
U1 = 1 - e^-(Z / (2*sigma^2))
e^-(Z / (2*sigma^2)) = 1 - U1
Z / (2*sigma^2) = -ln(1 - U1)
Z = r^2 = -2 * sigma^2 * ln(1 - U1)
r = sqrt(-2 * sigma^2 * ln(1 - U1)) = sigma * sqrt(-2 * ln(1 - U1))
theta=2*pi*U2
x = r*cos(theta) = sigma * sqrt(-2 * ln(1 - U1)) * cos(2*pi*U2)
y = r*sin(theta) = sigma * sqrt(-2 * ln(1 - U1)) * sin(2*pi*U2)
Notice, the sigma comes out in front, and the variable (1 - U1) is itself distributed uniformly on [0,1]. Hence, it can be simplified to another uniform variable U3, or whatever.
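Here's a minimal sketch of the whole recipe in C, just to make it concrete -- the function name gauss_pair is mine, rand()/RAND_MAX stands in for whatever uniform generator you actually use, and U1 is nudged away from zero so the log doesn't blow up (that's the radius issue Martin mentions below):

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    /* returns one Gaussian sample with standard deviation sigma;
       the second, independent sample of the pair is written to *second */
    static double gauss_pair(double sigma, double *second)
    {
        /* u1 is in (0,1], never 0, so log(u1) stays finite;
           ln(1-U1) is simplified to ln(U1) since both are uniform */
        double u1 = (rand() + 1.0) / ((double)RAND_MAX + 1.0);
        double u2 = rand() / (double)RAND_MAX;
        double r = sigma * sqrt(-2.0 * log(u1));
        double theta = 2.0 * acos(-1.0) * u2;   /* acos(-1) = pi */
        *second = r * sin(theta);
        return r * cos(theta);
    }

    int main(void)
    {
        double y, x = gauss_pair(1.0, &y);
        printf("%f %f\n", x, y);
        return 0;
    }

Averaging a pile of those samples and their squares should give mean ~0 and variance ~sigma^2, which is also a cheap sanity check for the patch.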
This is probably the *most* classic trick in statistics. Enjoy!
Chuck
On Sun, Mar 16, 2008 at 2:11 PM, Martin Peach martin.peach@sympatico.ca wrote:
Oh no that's wrong isn't it :( The log is necessary to keep the distribution normal, and the range is going to get wider the closer to zero the radius is allowed to get. The attached patch has a scale adjustment... Still I wonder what kind of distribution gaussianoise2 gives, it's not just white.
Martin