Hi,
Jamie Bullock wrote:
I find spectral irregularity to be quite a good noisiness metric, [...]
Krimphoff: Irregularity = \sum_{k=2}^{N-1} |a_k - \frac{a_{k-1} + a_k + a_{k+1}}{3}|
Jensen: Irregularity = \frac{\sum_{k=1}^{N} (a_k - a_{k+1})^2}{\sum_{k=1}^{N} a_k^2}
[...] where a_k is the amplitude of the k-th coefficient in the magnitude spectrum.
I googled a little, and as far as I understand, these definitions seem to apply to a harmonic sound, and a_k seems to be the amplitude of the k-th partial... that is, the sound is supposed to be pitched and its spectral peaks (partials) are assumed to have already been extracted...
Would it make any sense to apply the above formulas using the magnitude spectrum coefficients of the whole spectrum as a_k? If so, I'm not sure whether a high irregularity should be expected to correspond to high or low noisiness...
Hi,
On Fri, 2008-01-18 at 14:04 +0100, matteo sisti sette wrote:
Jamie Bullock wrote:
I find spectral irregularity to be quite a good noisiness metric, [...]
Krimphoff: Irregularity = \sum_{k=2}^{N-1} |a_k - \frac{a_{k-1} + a_k + a_{k+1}}{3}|
Jensen: Irregularity = \frac{\sum_{k=1}^{N} (a_k - a_{k+1})^2}{\sum_{k=1}^{N} a_k^2}
[...] where a_k is the amplitude of the k-th coefficient in the magnitude spectrum.
I googled a little, and as far as I understand, these definitions seem to apply to a harmonic sound, and a_k seems to be the amplitude of the k-th partial... that is, the sound is supposed to be pitched and its spectral peaks (partials) are assumed to have already been extracted...
The irregularity metric is generally computed on the magnitude spectrum, but there is no reason not to use it on the harmonic spectrum, as some authors do. It just gives you a measure of the 'jaggedness' of a given sequence of numbers, so you could use it on any distribution; e.g. you could take the irregularity of the number of Pd mailing list postings per month over a 12-month period!
For the purposes of computing noise content it makes sense to use the mag spectrum. Some authors, e.g. Park (2004), use the log magnitude spectrum.
Would it make any sense to apply the above formulas using the magnitude spectrum coefficients of the whole spectrum as a_k? If so, I'm not sure whether a high irregularity should be expected to correspond to high or low noisiness...
It depends which formula you use. Using Krimphoff ('Irregularity I' in libxtract), a high irregularity value corresponds to high noise content. The relationship is approximately inverted if you use Jensen.
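For reference, here is a minimal C sketch of the two measures, assuming a magnitude-spectrum (or harmonic-amplitude) array a of length N is already available; the function names are just for illustration, not the libxtract API:

#include <math.h>

/* Krimphoff: sum of |a[k] - (a[k-1] + a[k] + a[k+1]) / 3| over the
   interior bins.  Higher values roughly mean more noise-like content. */
double irregularity_krimphoff(const double *a, int N)
{
    double irr = 0.0;
    for (int k = 1; k < N - 1; k++)     /* k = 2 .. N-1 in 1-based terms */
        irr += fabs(a[k] - (a[k - 1] + a[k] + a[k + 1]) / 3.0);
    return irr;
}

/* Jensen: sum of squared differences between neighbouring bins,
   normalised by the total squared magnitude. */
double irregularity_jensen(const double *a, int N)
{
    double num = 0.0, den = 0.0;
    for (int k = 0; k < N - 1; k++)
        num += (a[k] - a[k + 1]) * (a[k] - a[k + 1]);
    for (int k = 0; k < N; k++)
        den += a[k] * a[k];
    return den > 0.0 ? num / den : 0.0;
}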
I performed an analysis in Sonic Visualiser using a sound that contains a linear crossfade between a 440Hz sine wave and white noise. The results can be found at:
http://www.postlude.co.uk/incoming/sine-noise/sine-noise.png
The red line shows Irregularity calculated via Krimphoff's method, the blue line shows Jensen.
The audio file I used is at:
http://www.postlude.co.uk/incoming/sine-noise/sine-noise.wav
Jamie
Thank you very much :)
By the way, I have just tried Spectral Flatness (as defined in Wikipedia) and it seems promising.
I attach an abstraction that outputs spectral flatness in PD-vanilla.
On Fri, 2008-01-18 at 15:47 +0100, matteo sisti sette wrote:
Thank you very much :)
By the way, I have just tried Spectral Flatness (as defined in Wikipedia) and it seems promising.
I attach an abstraction that outputs spectral flatness in PD-vanilla.
That's not actually a correct implementation of spectral flatness. What you've done is more like the arithmetic mean of the log-magnitude spectrum over the arithmetic mean of the mag spectrum, whereas the SFM is the _geometric_ mean of the mag spectrum over its arithmetic mean.
The problem is that in order to obtain the geometric mean, you need to ignore bins containing 0, otherwise you will get an overall value of 0 if there are '0 bins' present. See attached for a more 'correct' implementation.
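Spelled out in C terms, the definition looks something like the sketch below (the function name is made up for illustration). It skips zero bins as described above; note that, as discussed further down, the running product will quickly under- or overflow for realistic spectrum sizes:

#include <math.h>

/* Direct-from-definition SFM: geometric mean over arithmetic mean of the
   magnitude spectrum, ignoring '0 bins' so the product is not forced to 0.
   The running product is numerically fragile -- see the log-domain trick
   discussed below. */
double sfm_naive(const double *mag, int N)
{
    double product = 1.0, sum = 0.0;
    int count = 0;

    for (int k = 0; k < N; k++) {
        if (mag[k] > 0.0) {           /* skip zero bins */
            product *= mag[k];
            sum     += mag[k];
            count++;
        }
    }
    if (count == 0 || sum == 0.0)
        return 0.0;

    double gm = pow(product, 1.0 / count);   /* geometric mean */
    double am = sum / count;                 /* arithmetic mean */
    return gm / am;                          /* ~1 for noise, ~0 for a pure tone */
}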
There is also an external that implements SFM in the CVS:
http://pure-data.cvs.sourceforge.net/pure-data/externals/postlude/flib/src/
See sfm~.c
However, the flib library in general is deprecated, and libxtract should be used instead.
At any rate, both of the Pd implementations using [fexpr~] are horribly inefficient, and SFM is inefficient to compute at the best of times because it requires at least two iterations over the input vector. That's why I recommended Irregularity. It tells you roughly the same thing, but is a nicer feature in terms of computation cost and things like avoiding NaNs and infs.
Jamie
That's not actually a correct implementation of spectral flatness. What you've done is more like the arithmetic mean of the log-magnitude spectrum over the arithmetic mean of the mag spectrum, whereas the SFM is the _geometric_ mean of the mag spectrum over its arithmetic mean.
Isn't the arithmetic mean of the log-magnitude equivalent to the log of the geometric mean of the magnitude? So if you reverse the logarithm after calculating it, as I did in my attachment, you obtain the geometric mean of the magnitude. I think this is the only way of calculating the geometric mean of such a large vector (or isn't it?), because actually computing the product would soon overflow the precision of a float, giving either +INF or zero (even with no zero bins); I had tried that first.
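In C terms the trick looks something like the sketch below; the -100 dB floor (1e-5 in linear amplitude) is an assumption standing in for the [rmstodb~] - 100 clipping mentioned further down, and none of this is taken from the libxtract source:

#include <math.h>

/* Log-domain SFM: the arithmetic mean of log(mag[k]) is the log of the
   geometric mean, so exponentiating it recovers the geometric mean without
   ever forming the huge (or tiny) running product. */
double sfm_log_domain(const double *mag, int N)
{
    const double floor_val = 1e-5;   /* ~ -100 dB; treats zero bins as tiny but nonzero */
    double log_sum = 0.0, sum = 0.0;

    for (int k = 0; k < N; k++) {
        double m = mag[k] > floor_val ? mag[k] : floor_val;
        log_sum += log(m);
        sum     += m;
    }

    double gm = exp(log_sum / N);    /* geometric mean via the log identity */
    double am = sum / N;             /* arithmetic mean */
    return gm / am;
}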
Also, I may be missing something, but I think your attached patch only calculates the product and sum of the LAST TWO bins. Note you use [fexpr~ $x[0]+$x[-1]] and [fexpr~ $x[0]*$x[-1]] where I would use: [fexpr~ $x[0]+$y[-1]] and [fexpr~ $x[0]*$y[-1]]
The problem is that in order to obtain the geometric mean, you need to ignore bins containing 0, otherwise you will get an overall value of 0 if there are '0 bins' present.
Yeah, here you are right. I didn't take care of that. Well, no, wait... I did. Using [rmstodb~] - 100 for the log calculation, a 0 bin gets clipped to a -100 log value, thus being treated as a very, very small but nonzero bin. Not sure it is the most correct thing to do, but it kind of works.
At any rate, both of the Pd implementations using [fexpr~] are horribly inefficient,
That horribly? I get 4-5% CPU usage, which may be a lot, but I don't need to apply it to more than two or three signals.
That's why I recommended Irregularity. It tells you roughly the same thing, but is a nicer feature in terms of computation cost,
I'll try that out and make some comparisons.
However, having a look at the formulas (if, again, I'm not missing something), it seems to me that it should have a similar computation cost: you still need to iterate over the vector, don't you?
Thanks, m.
On Fri, 2008-01-18 at 18:48 +0100, Matteo Sisti Sette wrote:
That's not actually a correct implementation of spectral flatness. What you've done is more like the arithmetic mean of the log-magnitude spectrum over the arithmetic mean of the mag spectrum, whereas the SFM is the _geometric_ mean of the mag spectrum over its arithmetic mean.
Isn't the arithmetic mean of the log-magnitude equivalent to the log of the geometric mean of the magnitude? So if you reverse the logarithm after calculating it, as I did in my attachment, you obtain the geometric mean of the magnitude.
Actually, you're right! ... and that's a cool technique!
I think this is the only way of calculating the geometric mean of such a large vector (or isn't it?), because actually computing the product would soon overflow the precision of a float, giving either +INF or zero (even with no zero bins); I had tried that first.
Also true. That's why libxtract uses double precision for the SFM calculation, and why flib is deprecated ;-)
I just tested libxtract spectral_flatness() against your abstraction, and the output is roughly the same. Even though it uses doubles, the lx version definitely loses some precision. It's approximately twice as fast though.
Also, I may be missing something, but I think your attached patch only calculates the product and sum of the LAST TWO bins. Note you use [fexpr~ $x[0]+$x[-1]] and [fexpr~ $x[0]*$x[-1]] where I would use: [fexpr~ $x[0]+$y[-1]] and [fexpr~ $x[0]*$y[-1]]
Erm... forget it. I never could get the hang of fexpr~ ;-)
The problem is that in order to obtain the geometric mean, you need to ignore bins containing 0, otherwise you will get an overall value of 0 if there are '0 bins' present.
Yeah, here you are right. I didn't take care of that. Well, no, wait... I did. Using [rmstodb~] - 100 for the log calculation, a 0 bin gets clipped to a -100 log value, thus being treated as a very, very small but nonzero bin. Not sure it is the most correct thing to do, but it kind of works.
I can't think of any major flaws in this method.
At any rate, both of the Pd implementations using [fexpr~] are horribly inefficient,
That horribly? I get 4-5% CPU usage, which may be a lot, but I don't need to apply it to more than two or three signals.
Well if it works for you then that's fine. I don't believe in optimising things for speed just for the sake of it.
On my machine I get 5-8% load for your abstraction and 2-4% load for xtract~, but if there's no problem, you certainly don't want to go to the trouble of installing a library and an external just for one feature!
That's why I recommended Irregularity. It tells you roughly the same thing, but is a nicer feature in terms of computation cost,
I'll try that out and make some comparisons.
However, having a look at the formulas (if, again, I'm not missing something), it seems to me that it should have a similar computation cost: you still need to iterate over the vector, don't you?
Actually you're right.
I was thinking in terms of something that did:
gm = get_geometric_mean(...)
am = get_arithmetic_mean(...)
sfm = 10 * log10(gm / am)
But libxtract doesn't do that. I just did a small benchmark, and sfm and irregularity_j are on a par, with irregularity_k about 30% slower.
Jamie
On Jan 18, 2008 11:48 AM, Matteo Sisti Sette matteosistisette@gmail.com wrote:
Also, I may be missing something, but I think your attached patch only calculates the product and sum of the LAST TWO bins. Note you use [fexpr~ $x[0]+$x[-1]] and [fexpr~ $x[0]*$x[-1]] where I would use: [fexpr~ $x[0]+$y[-1]] and [fexpr~ $x[0]*$y[-1]]
I don't really know what you two are talking about, but I know what this means.
[fexpr~ $x[0]+$x[-1]] and [fexpr~ $x[0]*$x[-1]]
evaluate on every pair of samples. You get an output vector that looks like (x[0]+x[-1], x[1]+x[0], x[2]+x[1], ..., x[N-1]+x[N-2]) or (x[0]*x[-1], x[1]*x[0], x[2]*x[1], ..., x[N-1]*x[N-2]).
Whereas,
[fexpr~ $x[0]+$y[-1]] and [fexpr~ $x[0]*$y[-1]]
are accumulators!!! They will just keep growing and growing. Consider the equations in the following way:
[fexpr~ $x[0]+$y[-1]] means y[n] = y[n-1] + x[n] for all n
You can expand this by substitution:
y[n] = y[n-2] + x[n-1] + x[n]
y[n] = y[n-3] + x[n-2] + x[n-1] + x[n]
and so on....
Suppose we add up terms between arbitrary indexes a and b (which could be more than one block apart):
y[b] = sum(i = a+1 to b, x[i]) + y[a]
likewise
[fexpr~ $x[0]*$y[-1]] means y[n] = y[n-1]*x[n]
y[b] = product(i = a+1 to b, x[i]) * y[a]
If y is ever zero, it will always be zero after that... so, to use this, you would have to seed the values of y using the "set y1" command. I always check this bookmark for reference because I need it all the time! http://crca.ucsd.edu/~syadegar/expr.html
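Not Pd, but here is a small C sketch of the two recurrences, just to make the difference concrete; the function names and the explicit x_prev/y_prev arguments carried across blocks are illustrative assumptions, not how fexpr~ is implemented internally:

/* 'pairwise' mirrors [fexpr~ $x[0]+$x[-1]]: each output only combines a
   sample with its immediate predecessor.
   'accumulate' mirrors [fexpr~ $x[0]+$y[-1]]: the previous *output* is fed
   back, so the sum keeps growing across the vector (and across blocks). */

void pairwise(const float *x, float *y, int n, float x_prev)
{
    for (int i = 0; i < n; i++) {
        y[i] = x[i] + x_prev;    /* y[i] = x[i] + x[i-1] */
        x_prev = x[i];
    }
}

void accumulate(const float *x, float *y, int n, float y_prev)
{
    for (int i = 0; i < n; i++) {
        y[i] = x[i] + y_prev;    /* y[i] = y[i-1] + x[i] -> running sum */
        y_prev = y[i];
    }
}

The multiplicative versions work the same way, with * in place of + (and, as noted above, a seed of zero stays zero forever).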
Chuck