Charles Henry wrote:
On 8/20/06, Mathieu Bouchard <matju@artengine.ca> wrote:
If a signal is random, then how can I expect any part of it to be similar to any other part of it?
And then, if I look at more general tendencies using bigger signal blocks or averaging a lot of signal blocks together, how do I NOT approach a theoretical model from probability theory?
... The auto-covariance is a measure of how a function correlates with itself at different amounts of lag. For a fractional Brownian motion B with Hurst exponent h, the covariance of B(s) and B(t) at two times s and t is (1/2)(|s|^(2h) + |t|^(2h) - |t-s|^(2h)), which means the variance of the increment B(t) - B(s) grows as |t-s|^(2h). So, in the time domain, this type of "random" signal is correlated with itself in a probabilistic (statistical) sense. I may be wrong... I had substantial difficulty with the subject, and I'm certainly interested in finding out how this stuff really works.
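That covariance formula is easy to check numerically. Here is a minimal sketch of my own (not anything from the original posts): it simulates fractional Brownian motion by the Cholesky method and compares the empirical covariance against the exact formula. numpy, the Hurst value h = 0.7, and the sample/trial counts are all illustrative assumptions.

# Sketch: simulate fBm via Cholesky factorization of its covariance
# matrix, then verify the empirical covariance of B(s), B(t) matches
# 0.5*(|s|^(2h) + |t|^(2h) - |t-s|^(2h)).
import numpy as np

h = 0.7                          # Hurst exponent, 0 < h < 1 (assumed)
t = np.linspace(0.01, 1.0, 50)   # sample times (avoid t=0, where B=0)

# Exact fBm covariance matrix from the formula above
cov = 0.5 * (t[:, None]**(2*h) + t[None, :]**(2*h)
             - np.abs(t[:, None] - t[None, :])**(2*h))

L = np.linalg.cholesky(cov)      # cov = L @ L.T

rng = np.random.default_rng(0)
n_trials = 20000
z = rng.standard_normal((n_trials, len(t)))
paths = z @ L.T                  # each row is one fBm sample path

emp = paths.T @ paths / n_trials # empirical covariance (mean is zero)
print("max |empirical - exact| covariance:",
      np.abs(emp - cov).max())   # shrinks as n_trials grows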
Yes, it's not instantaneously self-similar, it's statistically self-similar, and the autocorrelation and autocovariance both increase as a increases from zero in 1/(f^a). When a = 0 you have white noise, and the signal is uncorrelated with itself at any nonzero lag; but as you add low-frequency power to the signal, the correlation at any given lag has to increase, because the slow components keep nearby samples close together. A quick experiment makes this visible (see the sketch below).

Beyond red noise there is also 'long-tailed' noise (for example: http://www.pnas.org/cgi/content/full/102/13/4771 or the stock market), where there is more correlation at large lags than expected, along with clumping or a non-Gaussian distribution of values. That doesn't seem to be covered by the 1/(f^a) concept.
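Here is the experiment, again a minimal sketch of my own rather than anything from the thread: it generates 1/(f^a) noise by spectrally shaping white noise and prints the normalized autocorrelation at a fixed lag for several a values. The lag of 100 samples and the particular a values are arbitrary illustrative choices.

# Sketch: colored noise via spectral shaping, then autocorrelation
# at a fixed lag for increasing a in 1/(f^a).
import numpy as np

def colored_noise(n, a, rng):
    """White noise filtered so its power spectrum falls off as 1/f^a."""
    white = rng.standard_normal(n)
    spec = np.fft.rfft(white)
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                      # avoid dividing by zero at DC
    spec *= f ** (-a / 2.0)          # amplitude ~ f^(-a/2), power ~ f^(-a)
    x = np.fft.irfft(spec, n)
    return (x - x.mean()) / x.std()  # normalize to zero mean, unit variance

rng = np.random.default_rng(1)
n, lag = 1 << 16, 100
for a in (0.0, 0.5, 1.0, 2.0):
    x = colored_noise(n, a, rng)
    r = np.mean(x[:-lag] * x[lag:])  # autocorrelation at the given lag
    print(f"a = {a:3.1f}  r(lag={lag}) = {r:+.3f}")

# Expected: r is near 0 for white noise (a = 0) and grows toward 1 as a
# increases, i.e. the added low-frequency power raises the correlation.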
Martin