Hi Jamie-
Thanks, I would love to see the abstractions, as I was sort of shooting in the dark for a while implementing even the relatively simple spectral centroid. I originally thought I should be able to do the computation all in the signal domain from the [rfft~] object's outputs... is that possible? What I ended up doing is writing each analysis frame into an array, then using [bang~] to trigger the calculation for each frame, reading through the array.
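For reference, this is roughly the per-frame calculation I mean, written out as a quick Python/numpy sketch (not the Pd patch itself; the function name, frame size and sample rate here are just placeholders, and np.fft.rfft stands in for [rfft~]):

import numpy as np

def spectral_centroid(frame, sample_rate=44100):
    # Magnitude spectrum of one analysis frame (what I read back out of the array).
    spectrum = np.abs(np.fft.rfft(frame))
    # Bin centre frequencies in Hz.
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    if spectrum.sum() == 0.0:
        return 0.0  # silent frame
    # Centroid = magnitude-weighted mean of the bin frequencies.
    return float((freqs * spectrum).sum() / spectrum.sum())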
Well, if you could package those up it would be immensely appreciated!
Jacob
On 7/15/05, Jamie Bullock jamie@postlude.co.uk wrote:
Hi Jacob,
A lot of work has been done on this, particularly in the field of MIR (Musical Information Retrieval). One method is to treat the frequency spectrum as a statistical distribution, and then extract various characteristics of the distribution. These can include:
Mean: the arithmetic average
Variance: the spectral 'spread' about the mean
Deviation: the square root of the variance
Skewness: a measure of asymmetry around the mean
Kurtosis: a measure of the relative spectral peakedness
Irregularity: a measure of the jaggedness of the spectrum
There are many others including Tristimulus and Inharmonicity, but I can't remember the definitions off-hand. A Google for any of the above should give you the formulae.
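As a rough sketch of the formulae, treating the normalised magnitude spectrum as a distribution over the bin frequencies (this is my own quick Python/numpy rendering, not the abstractions themselves, and the irregularity shown is Jensen's version - there are other definitions):

import numpy as np

def spectral_stats(spectrum, freqs):
    # spectrum: magnitude spectrum of one frame (assumed non-silent);
    # freqs: the corresponding bin frequencies in Hz.
    p = spectrum / spectrum.sum()                      # normalise to a distribution
    mean = (freqs * p).sum()                           # arithmetic average (centroid)
    variance = (((freqs - mean) ** 2) * p).sum()       # spread about the mean
    deviation = np.sqrt(variance)                      # square root of the variance
    skewness = (((freqs - mean) ** 3) * p).sum() / deviation ** 3
    kurtosis = (((freqs - mean) ** 4) * p).sum() / variance ** 2
    # Irregularity (Jensen): squared bin-to-bin differences over total squared magnitude.
    irregularity = (np.diff(spectrum) ** 2).sum() / (spectrum ** 2).sum()
    return mean, variance, deviation, skewness, kurtosis, irregularity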
I have a set of abstractions that implement some of the above. I'll package them up and make them available within the next couple of days.
Regards,
Jamie
On Fri, 2005-07-15 at 10:44 -0400, Jacob Last wrote:
Hi all--
I'm currently working on a project for controlling my granular synth patch using a control stream derived from various features extracted from an input signal (soundfile or live). So far I'm working with the spectral centroid, which I've implemented as a PD patch.
I'm wondering if people have other ideas for perceptual features that can be reliably extracted from an audio stream using the FFT or otherwise. The input is not necessarily pitched (I might eventually want to analyze the synth's own granular output as well, in addition to noisy and unpitched sounds), so I'm not that interested in pitch tracking and the like, but rather in higher-level sonic features. For example, how might I detect a period of sharp attacks on a wind instrument? Maybe by using some sort of peak threshold with the spectral centroid? "Smoothness" of a sound? A continuum from pitched to unpitched? Etc.
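To make the attack idea concrete, something like the following is what I have in mind (purely a sketch; the function name and the 1000 Hz threshold are arbitrary and would need tuning): flag a frame as an "attack" when the centroid jumps sharply from the previous frame.

def detect_attacks(centroids, threshold=1000.0):
    # centroids: spectral centroid (Hz) for each successive analysis frame.
    # Returns indices of frames where the centroid jumps upward by more than
    # 'threshold' Hz relative to the previous frame - a crude stand-in for a
    # sharp attack.
    attacks = []
    for i in range(1, len(centroids)):
        if centroids[i] - centroids[i - 1] > threshold:
            attacks.append(i)
    return attacks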
Also, if there are any PD implementations (abstractions or externals) of this sort of stuff, please inform me; I haven't found anything yet.
I'd very much appreciate any input!
Best, Jacob