On 8/20/06, Mathieu Bouchard <matju@artengine.ca> wrote:
> If a signal is random then how can I expect any part of it to be similar to any other part of it?
> And then, if I look at more general tendencies using bigger signal blocks or averaging a lot of signal blocks together, how do I NOT approach a theoretical model from probability theory?
If you average signal blocks together, they should approach zero. How you would avoid approaching that from probability, I have no idea.
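Just to make that concrete (this is my own quick sketch, not anything from the thread, and the block size and count are made up): average many blocks of zero-mean white noise and the element-wise block average shrinks toward zero, roughly like 1/sqrt(number of blocks).

import numpy as np

rng = np.random.default_rng(0)
block_size, n_blocks = 64, 10000
blocks = rng.standard_normal((n_blocks, block_size))

avg = blocks.mean(axis=0)    # element-wise average over all blocks
print(np.abs(avg).max())     # small, and it gets smaller as n_blocks grows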
The only reason I know jack about this problem is that I just had a course on fractional Brownian motion and stochastic calculus this year. And still.... it's a friggin devil to wrap your mind around :)
The way these noises are self-similar is in terms of their auto-covariance. The value of the Hurst exponent, h, determines the exponent, a, in 1/f^a noise (for fBm, a = 2h + 1). A single realization of the process unfolds in a single way (it's one deterministic function) drawn out of a huge number of possibilities (one big function space).
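As a rough numerical sanity check on that spectral relation (again my own sketch, not from the mail): take h = 1/2, i.e. ordinary Brownian motion built as a cumulative sum of white noise, and the averaged periodogram should fall off like 1/f^a with a = 2h + 1 = 2.

import numpy as np

rng = np.random.default_rng(0)
n, n_trials = 8192, 200
spec = np.zeros(n // 2 + 1)
for _ in range(n_trials):
    path = np.cumsum(rng.standard_normal(n))    # plain Brownian motion, h = 0.5
    spec += np.abs(np.fft.rfft(path))**2
spec /= n_trials

freqs = np.fft.rfftfreq(n)
keep = (freqs > 0.001) & (freqs < 0.05)         # low-frequency band where 1/f^a holds
slope = np.polyfit(np.log(freqs[keep]), np.log(spec[keep]), 1)[0]
print(-slope)                                   # roughly a = 2h + 1 = 2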
A single fractional Brownian motion is a random function of time that has 0 mean (relative to the point we start at) at every point in time, and a variance of t^(2h).
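Here's a sketch of how you could actually draw such paths (my own illustration; the grid size, path count, and h value are arbitrary): take the Cholesky factor of the exact fBm covariance and check that the variance grows like t^(2h).

import numpy as np

def fbm_paths(h, n_steps=200, n_paths=2000, seed=0):
    # Covariance of fBm on a grid (excluding t = 0); its Cholesky factor
    # turns independent Gaussians into correlated sample paths.
    t = np.linspace(1.0 / n_steps, 1.0, n_steps)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * h) + u**(2 * h) - np.abs(s - u)**(2 * h))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))
    z = np.random.default_rng(seed).standard_normal((n_steps, n_paths))
    return t, L @ z                     # each column is one sample path

t, paths = fbm_paths(h=0.7)
print(np.allclose(paths.var(axis=1), t**1.4, rtol=0.2))   # Var(t) ~ t^(2h)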
The auto-covariance is a measure of how the function correlates with itself at different amounts of lag. For two times s and t, the expected squared difference between B(t) and B(s) is |t-s|^(2h), which together with the t^(2h) variance works out to an auto-covariance of (1/2)(t^(2h) + s^(2h) - |t-s|^(2h)). So, in the time domain, the self-similarity is a statement about this probabilistic correlation structure rather than about any single realization. I may be wrong... I had substantial difficulty with the subject, and I'm certainly interested in finding out how this stuff really works.
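One way to see the |t-s|^(2h) increment scaling numerically (same caveat: my own sketch, reusing the fbm_paths helper from above) is to regress the log of the mean squared increment against the log of the lag; the slope should come out near 2h.

import numpy as np

t, paths = fbm_paths(h=0.7)             # helper defined in the sketch above
lags = np.arange(1, 20)
msq = [np.mean((paths[lag:] - paths[:-lag])**2) for lag in lags]
slope = np.polyfit(np.log(lags), np.log(msq), 1)[0]
print(slope / 2.0)                      # close to h = 0.7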
Chuck