On Fri, 2008-01-18 at 18:48 +0100, Matteo Sisti Sette wrote:
That's not actually a correct implementation of spectral flatness. What you've done is more like the arithmetic mean of the log-magnitude spectrum over the arithmetic mean of the magnitude spectrum, whereas the SFM is the _geometric_ mean of the magnitude spectrum over its arithmetic mean.
Isn't the arithmetic mean of the log-magnitude equivalent to the log of the geometric mean of the magnitude? So if you invert the logarithm after calculating it, as I did in my attachment, you obtain the geometric mean of the magnitude.
Actually, you're right! ... and that's a cool technique!
I think this is the only way of calculating the geometric mean of such a large vector (or isn't it?), because actually computing the product would quickly overflow the precision of a float, giving either +INF or zero (even with no zero bin); I had tried that first.
Also true. That's why libxtract uses double precision for the SFM calculation, and why flib is deprecated ;-)
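In C, the trick looks something like this minimal sketch (my own names, not the actual libxtract code): sum the logs and exponentiate the mean, so the full product is never formed.

#include <math.h>
#include <stddef.h>

/* Geometric mean of a magnitude spectrum, computed in the log domain so
 * the product of hundreds of bins never overflows or underflows a float.
 * log(product) = sum(logs), so exp(mean(logs)) = geometric mean. */
static double geometric_mean(const double *mag, size_t n)
{
    double log_sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (mag[i] <= 0.0)
            return 0.0;          /* a zero bin forces the product to 0 (see below) */
        log_sum += log(mag[i]);
    }
    return n > 0 ? exp(log_sum / (double)n) : 0.0;
}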
I just tested libxtract's spectral_flatness() against your abstraction, and the output is roughly the same. Even though it uses doubles, the libxtract version definitely loses some precision. It's approximately twice as fast, though.
Also, I may be missing something, but I think your attached patch only calculates the product and sum of the LAST TWO bins. Note you use [fexpr~ $x[0]+$x[-1]] and [fexpr~ $x[0]*$x[-1]], where I would use [fexpr~ $x[0]+$y[-1]] and [fexpr~ $x[0]*$y[-1]]. In [fexpr~], $x[-1] is just the previous _input_ sample, whereas $y[-1] is the previous _output_, which is what a running sum or product needs.
Erm... forget it. I never could get the hang of fexpr~ ;-)
The problem is that in order to obtain the geometric mean, you need to ignore bins containing 0, otherwise you will get an overall value of 0 if there are '0 bins' present.
Yeah, here you're right. I didn't take care of that. Well, no, wait... I did. Using [rmstodb~]-100 for the log calculation, a 0 bin gets clipped to a log value of -100, so it is treated as a very small but nonzero bin. Not sure it's the most correct thing to do, but it kinda works.
I can't think of any major flaws in this method.
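In C terms, that clipping amounts to something like the sketch below; the 1e-5 floor is just an illustrative choice of mine, mirroring the -100 dB that [rmstodb~]-100 produces for a zero bin.

/* Floor each magnitude at roughly -100 dB re full scale (20*log10(1e-5) = -100),
 * so a zero bin contributes a tiny but nonzero factor to the geometric mean
 * instead of forcing the whole product to 0. */
static double floor_bin(double mag)
{
    const double floor_mag = 1e-5;
    return mag < floor_mag ? floor_mag : mag;
}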
At any rate, both of the Pd implementations using [fexpr~] are horribly inefficient,
That horribly? I get 4-5% CPU usage, which may be a lot, but I don't need to apply it to more than two or three signals.
Well if it works for you then that's fine. I don't believe in optimising things for speed just for the sake of it.
On my machine I get 5-8% load for your abstraction, and 2-4% load for xtract~, but if that's not a problem for you, you certainly don't want to go to the trouble of installing a library and an external just for one feature!
That's why I recommended Irregularity. It tells you roughly the same thing, but it's a nicer feature in terms of computational cost.
I'll try that out and make some comparisons.
However, having a look at the formulas (if, again, I'm not missing something), it seems to me that it should have a similar computational cost: you still need to iterate over the whole vector, don't you?
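Just to check my understanding, a rough C sketch of one common irregularity definition (Jensen's, if I'm reading the formulas right; I'm not sure this is exactly what xtract~ computes). It does indeed need a full pass over the spectrum, just like the flatness:

#include <stddef.h>

/* One common definition of spectral irregularity (Jensen): the sum of
 * squared differences between adjacent magnitudes, normalised by the
 * total squared magnitude. Like the flatness, it needs one pass over
 * the whole spectrum. */
static double irregularity(const double *mag, size_t n)
{
    double num = 0.0, den = 0.0;
    if (n < 2)
        return 0.0;
    for (size_t i = 0; i + 1 < n; i++) {
        double d = mag[i] - mag[i + 1];
        num += d * d;
        den += mag[i] * mag[i];
    }
    den += mag[n - 1] * mag[n - 1];   /* include the last bin in the norm */
    return den > 0.0 ? num / den : 0.0;
}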
Actually you're right.
I was thinking in terms of something that did:
gm = get_geometric_mean(...)
am = get_arithmetic_mean(...)
sfm = 10 * log10(gm / am)
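Putting it together, a minimal self-contained C sketch of that structure (my names and the 1e-5 floor are illustrative; this is not what libxtract does internally):

#include <math.h>
#include <stddef.h>

/* Spectral flatness in dB: 10*log10(geometric mean / arithmetic mean)
 * of the magnitude spectrum, with zero bins floored at roughly -100 dB. */
static double sfm_db(const double *mag, size_t n)
{
    double log_sum = 0.0, sum = 0.0;
    if (n == 0)
        return 0.0;
    for (size_t i = 0; i < n; i++) {
        double m = mag[i] < 1e-5 ? 1e-5 : mag[i];  /* -100 dB floor */
        log_sum += log(m);
        sum += m;
    }
    double gm = exp(log_sum / (double)n);  /* geometric mean */
    double am = sum / (double)n;           /* arithmetic mean */
    return 10.0 * log10(gm / am);
}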
But libxtract doesn't do that. I just did a small benchmark, and sfm and irregularity_j are on a par, with irregularity_k about 30% slower.
Jamie