I did research for a year on how to do this. I came to write externals for PD because of that project, but I never quite got to the point where I could do it. It's on my long to-do list, which means it will probably never be finished. Here are some ideas:
I was trying to port the formant modelling tools from the Speech Filing System from UCL: http://www.phon.ucl.ac.uk/resource/sfs/ to PD in 2005-06, but didn't get much support from my superiors who were running the project. I never got it to work, but I'd only just begun proper C programming then. I'm sure I wasn't far off... I'd love to try again if I get time in my schedule (I now have 2 kids and 5 jobs). The advantage of this method is that with careful measurement of the residual spectrum, it is possible to re-create the sound of a voice from a good formant/residual model. Thus, we can make a person's voice "speak" the words we want them to, or get a hundred people to sing in tune! It is a reversible algorithm, so the original sound can be re-created from the analysis.
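To make the formant/residual idea concrete, here is a rough Python/numpy sketch (not the SFS tools themselves, just standard LPC analysis as a stand-in): analyse a frame into an all-pole "formant" filter plus a residual, then run the residual back through the filter to recover the frame, which is what makes the analysis reversible. Frame size and LPC order here are arbitrary choices.

import numpy as np
from scipy.signal import lfilter
from scipy.linalg import solve_toeplitz

def lpc(frame, order=12):
    # autocorrelation-method LPC: prediction coefficients a[1..order]
    r = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    return solve_toeplitz((r[:order], r[:order]), r[1:order + 1])

def analyse(frame, order=12):
    a = lpc(frame, order)
    # inverse (whitening) filter A(z) = 1 - sum a_k z^-k gives the residual
    residual = lfilter(np.concatenate(([1.0], -a)), [1.0], frame)
    return a, residual

def resynthesize(a, residual):
    # filtering the residual back through 1/A(z) reconstructs the frame
    return lfilter([1.0], np.concatenate(([1.0], -a)), residual)

# frame = one windowed chunk of speech, e.g. 512 samples at 16 kHz
# a, res = analyse(frame)
# reconstructed = resynthesize(a, res)   # ~= frame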
The biggest problem with all of this is that speech is identified not just by its instantaneous timbre, but also by the way the timbre and pitch change over time. So speech recognition technology uses a thing called a Markov model to map the likelihood of one timbre changing to another. For example, the likelihood of a "k" sound followed by an "r" is quite high, since there are many words like "cracker, croak" that have this morphology. Whereas "k" followed by "s" is much rarer in English, so its likelihood is much lower.
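Just to illustrate the Markov idea with toy numbers (the probabilities below are invented for illustration, not measured from any corpus): the plausibility of a phoneme sequence is the product of its transition likelihoods.

# toy phoneme-transition table; numbers are made up
phone_transitions = {
    ('k', 'r'): 0.08,   # "cracker", "croak" ... common in English
    ('k', 's'): 0.01,   # much rarer
}

def sequence_likelihood(phones, transitions, floor=1e-4):
    p = 1.0
    for a, b in zip(phones, phones[1:]):
        p *= transitions.get((a, b), floor)
    return p

print(sequence_likelihood(['k', 'r'], phone_transitions))  # 0.08
print(sequence_likelihood(['k', 's'], phone_transitions))  # 0.01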
I...well there it is, Ed
The task would be to identify, from a live talk, the voice of the current speaker amongst several. Training beforehand is also possible... I guess this could be done by training a simple neural network on an FFT decomposition of the voices, so there must be some software out there for sure...
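Just to make that idea concrete (swapping the neural network for a simple nearest-centroid comparison, so this is only a sketch of the feature side): average FFT magnitudes per speaker as a "voiceprint", then cosine similarity to pick the closest one. Frame size and training data are placeholders.

import numpy as np

FRAME = 1024

def fft_features(signal):
    # average magnitude spectrum over all full frames of a mono signal
    frames = signal[:len(signal) // FRAME * FRAME].reshape(-1, FRAME)
    mags = np.abs(np.fft.rfft(frames * np.hanning(FRAME), axis=1))
    return mags.mean(axis=0)

def train(voices):
    # voices: {name: mono numpy array of that speaker talking}
    return {name: fft_features(sig) for name, sig in voices.items()}

def identify(snippet, centroids):
    feat = fft_features(snippet)
    score = lambda c: np.dot(feat, c) / (np.linalg.norm(feat) * np.linalg.norm(c))
    return max(centroids, key=lambda name: score(centroids[name]))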
Something tells me an FFT + neural network would be really bad at this. Seriously, that sounds like a doomed project. These things would all be huge:
1. FFT size (for resolution)
2. network size (based on the FFT size)
3. training set (lots of variance in the speaker is possible)
How about autocovariance and dot-product?
Ahead of time, create an array containing normalized autocovariance (an autocorrelation) of the speaker's voice.
Compute a running autocovariance of the sound. Decompose it into the portion matching the speaker's autocovariance and the portion that doesn't (via dot products, or projection operators), and compare the two.
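A rough sketch of that in Python/numpy (the lag count and the 0.9 threshold are pure guesses): precompute a normalized autocovariance template for the known speaker, then score incoming frames by the dot product of their normalized autocovariance with that template (the projection idea).

import numpy as np

LAGS = 256

def norm_autocov(x, lags=LAGS):
    x = x - x.mean()
    acov = np.correlate(x, x, mode='full')[len(x) - 1:len(x) - 1 + lags]
    return acov / (np.linalg.norm(acov) + 1e-12)

def speaker_template(recording):
    # ahead of time: normalized autocovariance of the known speaker's voice
    return norm_autocov(recording)

def match_score(frame, template):
    # dot product of unit vectors = how much of the frame's autocovariance
    # lies along the speaker's template
    return float(np.dot(norm_autocov(frame), template))

# template = speaker_template(clean_recording_of_target_speaker)
# for frame in incoming_frames:               # e.g. 4096-sample blocks
#     if match_score(frame, template) > 0.9:  # threshold is a guess
#         print("probably the target speaker")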
That would be ~less~ expensive and time-consuming than neural networks, but I wouldn't give it much chance of success either. It would probably match quite a few different people all the same.
I think that getting some kind of basic recognition of who is speaking would not be super difficult, if you have a clean recording of the voices. You need to get the formants of each voice, then use those as the basis for comparison. You could start with something like William Brent's timbreID library to isolate the different vowel sounds, then get a formant estimate for each of the vowels, then use that data for the pattern matching. It'll definitely take some research and a solid chunk of work to get it going.
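Not timbreID itself, but here's a sketch of the formant-matching step in plain Python: estimate the first few formants of a vowel frame from the LPC pole angles, then pick whichever speaker's stored formant set is closest. Sample rate, LPC order, and the distance measure are all assumptions.

import numpy as np
from scipy.linalg import solve_toeplitz

SR = 16000

def formants(frame, order=12, count=3):
    r = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    roots = np.roots(np.concatenate(([1.0], -a)))
    roots = roots[np.imag(roots) > 0]           # one of each conjugate pair
    freqs = np.sort(np.angle(roots) * SR / (2 * np.pi))
    freqs = freqs[freqs > 90]                   # drop near-DC poles
    return freqs[:count]                        # lowest resonances ~ formants

def closest_speaker(vowel_frame, profiles):
    # profiles: {name: stored formant array for the same vowel}
    f = formants(vowel_frame)
    return min(profiles, key=lambda name: np.linalg.norm(f - profiles[name]))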
.hc
Access to computers should be unlimited and total. - the hacker ethic