Hi,
Has anyone ever tried to do speaker recognition on *nix with any kind of software? I'm trying to find out if it's possible to find at least rudimentary working software that could send data to Pd.
The task would be to identify, from live talk, the voice of the current speaker among several. Training beforehand is also possible... I guess this could be done by training a simple neural network on an FFT decomposition of the voices, so there must be some software out there for sure...
Any ideas?
regards,
gnd/
On 2011-09-22 at 19:42:00, gnd@itchybit.org wrote:
The task would be to identify, from live talk, the voice of the current speaker among several. Training beforehand is also possible... I guess this could be done by training a simple neural network on an FFT decomposition of the voices, so there must be some software out there for sure...
If I recall correctly, it's better to take the log of the amplitude of the FFT, and then perhaps do an FFT again, before trying to extract that kind of timbral info.
An amplitude-wise log means that the spectra of filters add up instead of multiplying. That's supposed to make them easier to separate.
And the second FFT is supposed to make it easier to separate the vowel filters from the base pitch.
But I never tried any of that... or maybe I tried making a patch and then didn't really know how I'd use it and gave up. Something like that.
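What Mathieu is describing is essentially the real cepstrum. A minimal sketch in Python/NumPy (the frame length and the log floor are my own choices, not anything from this thread):

    import numpy as np

    def real_cepstrum(frame):
        # log of the magnitude spectrum: filters that multiply the
        # spectrum now merely add to it
        log_mag = np.log(np.abs(np.fft.rfft(frame)) + 1e-12)
        # second transform: the slowly-varying vocal-tract envelope ends
        # up in the low bins, the pitch harmonics show up further out
        return np.fft.irfft(log_mag)

    cep = real_cepstrum(np.random.randn(1024))  # stand-in for one speech frame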
| Mathieu Bouchard ---- tél: +1.514.383.3801 ---- Villeray, Montréal, QC
On Thu, Sep 22, 2011 at 12:42 PM, gnd@itchybit.org wrote:
The task would be to identify, from live talk, the voice of the current speaker among several. Training beforehand is also possible... I guess this could be done by training a simple neural network on an FFT decomposition of the voices, so there must be some software out there for sure...
Something tells me an FFT + neural network would be really bad at this. Seriously, that sounds like a doomed project if you tried. These things would be huge:
- fft size (for resolution)
- network size (based on the fft size)
- training set (lots of variance in the speaker is possible)
How about autocovariance and dot-product?
Ahead of time, create an array containing the normalized autocovariance (an autocorrelation) of the speaker's voice.
Compute a running autocovariance of the sound. Decompose it into the portion matching the speaker's autocovariance and the portion not matching, and compare the two (via dot products, or projection operators).
That would be ~less~ expensive and time-consuming than neural networks, but I wouldn't give it much chance of success either. It would probably match quite a few different people all the same.
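A rough sketch of that comparison in Python/NumPy (the lag count and unit normalization are my guesses, not something Charles specified):

    import numpy as np

    def normalized_autocorr(x, maxlag=512):
        x = x - np.mean(x)
        # full autocorrelation; keep the non-negative lags only
        ac = np.correlate(x, x, mode='full')[len(x) - 1 : len(x) - 1 + maxlag]
        return ac / (np.linalg.norm(ac) + 1e-12)

    # ahead of time: reference autocorrelation from a training recording
    # ('speaker_training' is a hypothetical array of samples)
    ref = normalized_autocorr(speaker_training)

    # at run time: running autocorrelation of the live window, scored by
    # dot product (cosine similarity, since both vectors are unit length)
    similarity = float(np.dot(ref, normalized_autocorr(live_window)))
    # similarity near 1.0 -> the window resembles the trained speaker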
Chuck
On Sep 22, 2011, at 4:13 PM, Charles Henry wrote:
On Thu, Sep 22, 2011 at 12:42 PM, gnd@itchybit.org wrote:
The task would be to identify, from live talk, the voice of the current speaker among several. Training beforehand is also possible... I guess this could be done by training a simple neural network on an FFT decomposition of the voices, so there must be some software out there for sure...
Something tells me an FFT + neural network would be really bad at this. Seriously, that sounds like a doomed project if you tried. These things would be huge:
- fft size (for resolution)
- network size (based on the fft size)
- training set (lots of variance in the speaker is possible)
How about autocovariance and dot-product?
Ahead of time, create an array containing the normalized autocovariance (an autocorrelation) of the speaker's voice.
Compute a running autocovariance of the sound. Decompose it into the portion matching the speaker's autocovariance and the portion not matching, and compare the two (via dot products, or projection operators).
That would be ~less~ expensive and time-consuming than neural networks, but I wouldn't give it much chance of success either. It would probably match quite a few different people all the same.
I think that getting some kind of basic recognition of who is speaking would not be super difficult, if you have a clean recording of the voices. You need to get the formants of the voice, then use those as the base comparison. You could start with something like William Brent's timbreID library to isolate the different vowel sounds, then get a formant profile for each of the vowels, then use that data for the pattern matching. It'll definitely take some research and a solid chunk of work to get it going.
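One standard way to estimate formants from a vowel frame (my suggestion, not necessarily what .hc had in mind) is linear prediction: fit an all-pole filter and read the formants off the pole angles. A sketch with NumPy/SciPy:

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def lpc_formants(frame, sr, order=12):
        # autocorrelation method: solve the Toeplitz normal equations
        frame = frame * np.hamming(len(frame))
        ac = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
        a = solve_toeplitz(ac[:order], ac[1:order + 1])
        # poles of the all-pole model 1 / (1 - sum_k a_k z^-k)
        poles = np.roots(np.concatenate(([1.0], -a)))
        poles = poles[np.imag(poles) > 0]            # one of each conjugate pair
        freqs = np.angle(poles) * sr / (2 * np.pi)   # pole angle -> Hz
        return np.sort(freqs[freqs > 90])            # drop near-DC poles

The first two or three values returned are the classic F1/F2/F3 estimates you would compare between speakers.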
.hc
Access to computers should be unlimited and total. - the hacker ethic
I did research for a year on how to do this. I came to writing externals for Pd because of that project, but I never quite got to the point where I could do it. It's on my long to-do list, which means it will probably never be finished. Here are some ideas:
I was trying to port the formant modelling tools from the Speech Filing System from UCL (http://www.phon.ucl.ac.uk/resource/sfs/) to Pd in 2005-06, but didn't get much support from my superiors who were running the project. I never got it to work, but I'd only just begun proper C programming then. I'm sure I wasn't far off... I'd love to try again if I get time in my schedule (I now have 2 kids and 5 jobs). The advantage of this method is that, with careful measurement of the residual spectrum, it is possible to re-create the sound of a voice from a good formant/residual model. Thus we can make a person's voice "speak" the words we want it to, or get a hundred people to sing in tune! It is a reversible algorithm, so the original sound can be re-created from the analysis.
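A stripped-down illustration of that reversibility, using plain LPC as a stand-in for the SFS formant/residual model: inverse-filter a frame to get the residual, then run the residual back through the all-pole filter to recover the frame exactly.

    import numpy as np
    from scipy.linalg import solve_toeplitz
    from scipy.signal import lfilter

    frame = np.random.randn(1024)  # stand-in for one windowed speech frame

    # fit the all-pole "formant" filter (same Toeplitz solve as above)
    ac = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    a = solve_toeplitz(ac[:12], ac[1:13])
    A = np.concatenate(([1.0], -a))        # A(z) = 1 - sum_k a_k z^-k

    residual = lfilter(A, [1.0], frame)    # analysis: whiten with A(z)
    rebuilt  = lfilter([1.0], A, residual) # synthesis: filter through 1/A(z)
    assert np.allclose(rebuilt, frame)     # the round trip is exact

Swapping in a different residual (or a different filter) at the synthesis step is what lets you make one voice "speak" other material.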
The biggest problem with all of this is that speech is identified not just by its instantaneous timbre, but also by the way the timbre and pitch change over time. So speech recognition technology uses a Markov model to map the likelihood of one timbre changing into another. For example, the likelihood of a "k" sound being followed by an "r" is quite high, since many words ("cracker", "croak") have this morphology, whereas "k" followed by "s" is much rarer in English, so its likelihood is much lower.
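A toy version of that idea (the phoneme labels below are invented for illustration): count label transitions in training data, then score how plausible a new sequence is under each speaker's transition table.

    import math
    from collections import defaultdict

    def transition_table(phonemes):
        # bigram counts -> conditional probabilities P(next | current)
        counts = defaultdict(lambda: defaultdict(int))
        for a, b in zip(phonemes, phonemes[1:]):
            counts[a][b] += 1
        return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
                for a, nxt in counts.items()}

    def log_likelihood(table, phonemes, floor=1e-6):
        # unseen transitions get a small floor instead of probability zero
        return sum(math.log(table.get(a, {}).get(b, floor))
                   for a, b in zip(phonemes, phonemes[1:]))

    table = transition_table(["k", "r", "ae", "k", "er"])  # "cracker"-ish
    score = log_likelihood(table, ["k", "r", "ow", "k"])   # "croak"-ish
    # the speaker model with the highest score explains the speech best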
I... well, there it is. Ed
- The Mel-Frequency Cepstral Coefficient (MFCC) of the FFT (Fast Fourier Transform) of a waveform is a good timbral identifier. William Brent's timbreID objects are good instantaneous timbre identifiers using this principle, but to build up a sophisticated model of a human voice (robust enough for speaker ID) you need to work out how to build a database. For an instantaneous MFCC identifier using an internal database, check out Michael Casey's "soundspotter" Pd external.
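For reference, a minimal MFCC extraction in Python using librosa (the filename and the choice of 13 coefficients are mine; timbreID and soundspotter have their own analysis settings):

    import numpy as np
    import librosa

    # hypothetical input file; sr=None keeps the file's native sample rate
    y, sr = librosa.load("speaker_sample.wav", sr=None)

    # one 13-coefficient timbre vector per analysis frame
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape (13, n_frames)

    # a crude whole-recording fingerprint: per-coefficient mean and spread
    fingerprint = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])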
Aside from the different analysis objects like [mfcc~], there is an object in the timbreID library that makes it easy to build a training database and make comparisons on the fly. But like Ed and others are saying - the problem is how to interpret the stored data. I never dove into the voice recognition problem, but my understanding is also that the magic is in the transitions. timbreID will help you get all the data you need if you can go the Markov model route. On the other hand, if I were going to take a stab at a simplified system based on isolated sounds, in general I'd guess that features of pure vowels would be more helpful in distinguishing between different speakers than features of "sss" sounds or consonants.
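In the spirit of the train-then-compare workflow William describes (my guess at the mechanics, not timbreID's actual code): store labelled feature vectors, then match each incoming frame to its nearest neighbour.

    import numpy as np

    def nearest_speaker(features, database):
        # database: list of (speaker_label, stored_feature_vector) pairs,
        # e.g. the MFCC fingerprints from the previous sketch
        best_label, best_dist = None, np.inf
        for label, stored in database:
            dist = np.linalg.norm(features - stored)  # Euclidean distance
            if dist < best_dist:
                best_label, best_dist = label, dist
        return best_label, best_dist

    # hypothetical usage:
    # database = [("alice", fp_alice), ("bob", fp_bob)]
    # who, _ = nearest_speaker(fp_live, database)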
On 2011-09-27 at 00:00:00, William Brent wrote:
On the other hand, if I were going to take a stab at a simplified system based on isolated sounds, in general I'd guess that features of pure vowels would be more helpful in distinguishing between different speakers than features of "sss" sounds or consonants.
Ekthept when thomeone thpeakth like thith, of courthe.
| Mathieu BOUCHARD ----- téléphone : +1.514.383.3801 ----- Montréal, QC
On Thu, Sep 22, 2011 at 07:42:54PM +0200, gnd@itchybit.org wrote:
Has anyone ever tried to do speaker recognition on *nix with any kind of software? I'm trying to find out if it's possible to find at least rudimentary working software that could send data to Pd.
The task would be to identify, from live talk, the voice of the current speaker among several. Training beforehand is also possible... I guess this could be done by training a simple neural network on an FFT decomposition of the voices, so there must be some software out there for sure...
You will probably need this: http://en.wikipedia.org/wiki/Mel-frequency_cepstrum
The problem you are describing is incredibly difficult.
Cheers,
Chris.
On Fri, Sep 23, 2011 at 09:33:59AM +0800, Chris McCormick wrote:
On Thu, Sep 22, 2011 at 07:42:54PM +0200, gnd@itchybit.org wrote:
The task would be to identify, from live talk, the voice of the current speaker among several. Training beforehand is also possible... I guess this could be done by training a simple neural network on an FFT decomposition of the voices, so there must be some software out there for sure...
You will probably need this: http://en.wikipedia.org/wiki/Mel-frequency_cepstrum
The problem you are describing is incredibly difficult.
I just realised that you are probably not talking about overlapping voices, which is orders of magnitude more difficult than sequential voices.
Cheers,
Chris.